• Title/Summary/Keyword: 정형 모델 (formal model)


Verifying a Safe P2P Security Protocol in M2M Communication Environment (M2M 통신환경에서 안전한 P2P 보안 프로토콜 검증)

  • Han, Kun-Hee;Bae, Woo-Sik
    • Journal of Digital Convergence
    • /
    • v.13 no.5
    • /
    • pp.213-218
    • /
    • 2015
  • In parallel with evolving information and communication technology, the M2M (Machine-to-Machine) industry has implemented multi-functional, high-performance systems and made great strides with the IoT (Internet of Things) and IoE (Internet of Everything). Authentication, confidentiality, anonymity, non-repudiation, data reliability, connectionlessness and traceability are prerequisites for communication security. Yet the wireless transmission section in M2M communication is exposed to intruders' attacks. Any security issue attributable to M2M wireless communication protocols may lead to serious concerns, including system faults, information leakage and privacy breaches. Therefore, mutual authentication and security are key components of protocol design. Recently, secure communication protocols have come to be regarded as highly important and have been studied accordingly. The present paper draws on a hash function, random numbers, secret keys and session keys to design a secure communication protocol, and tests the proposed protocol with a formal verification tool, Casper/FDR, to demonstrate its security against a range of intruders' attacks. In brief, the proposed protocol meets the security requirements and addresses these challenges without any problems.
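
The abstract does not spell out the message flow, so the following is only a minimal sketch of a nonce-and-hash challenge-response exchange with session-key derivation of the kind described; the pre-shared key setup, message ordering, and helper names are assumptions, not the paper's protocol:

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """One-way hash used for authenticators and key derivation."""
    return hashlib.sha256(b"|".join(parts)).digest()

# Pre-shared long-term secret key between device A and device B (assumed setup).
K = os.urandom(32)

# A -> B: challenge nonce
nonce_a = os.urandom(16)

# B -> A: its own nonce plus a hash proving knowledge of K
nonce_b = os.urandom(16)
auth_b = h(K, nonce_a, nonce_b)

# A verifies B, then returns its own proof
assert auth_b == h(K, nonce_a, nonce_b)   # A authenticates B
auth_a = h(K, nonce_b, nonce_a)
assert auth_a == h(K, nonce_b, nonce_a)   # B authenticates A

# Both sides derive the same session key from the secret and the fresh nonces
session_key = h(K, nonce_a, nonce_b, b"session")
```

A Casper/FDR script would model such an exchange symbolically and let FDR search for attacks; the sketch above only shows the cryptographic operations involved.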

A Study on High-Precision DEM Generation Using ERS-Envisat SAR Cross-Interferometry (ERS-Envisat SAR Cross-Interferomety를 이용한 고정밀 DEM 생성에 관한 연구)

  • Lee, Won-Jin;Jung, Hyung-Sup;Lu, Zhong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.4
    • /
    • pp.431-439
    • /
    • 2010
  • The cross-interferometric synthetic aperture radar (CInSAR) technique applied to ERS-2 and Envisat images is capable of generating a submeter-accuracy digital elevation model (DEM). However, it is very difficult to produce a high-quality CInSAR-derived DEM because of the difference in azimuth and range pixel size between ERS-2 and Envisat images as well as the small height ambiguity of the CInSAR interferogram. In this study, we propose an efficient method to overcome these problems, produce a high-quality DEM over northern Alaska, and compare the CInSAR-derived DEM with the national elevation dataset (NED) DEM from the U.S. Geological Survey. In the proposed method, azimuth common-band filtering is applied in the radar raw data processing to mitigate the mis-registration caused by the difference in azimuth and range pixel size, and a differential SAR interferogram (DInSAR) is used to reduce the unwrapping errors caused by the high fringe rate of the CInSAR interferogram. Using the CInSAR DEM, we identified and corrected man-made artifacts in the NED DEM. A wavenumber analysis further confirms that the CInSAR DEM has valid signal at high frequencies above 0.08 radians/m (about 40 m) while the NED DEM does not. Our results indicate that the CInSAR DEM is superior to the NED DEM in terms of both height precision and ground resolution.
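
The abstract does not reproduce the processing equations, but the basic relation between unwrapped interferometric phase and height via the height ambiguity can be sketched as follows; this is the standard conversion, not the paper's full processing chain, and the 5 m ambiguity in the example is an assumed value:

```python
import numpy as np

def phase_to_height(unwrapped_phase_rad: np.ndarray,
                    height_ambiguity_m: float) -> np.ndarray:
    """Convert unwrapped interferometric phase to topographic height.

    height_ambiguity_m is the height change producing one 2*pi fringe;
    a small height ambiguity (as in ERS-Envisat cross-interferometry)
    means each fringe maps to only a few metres of relief.
    """
    return unwrapped_phase_rad / (2.0 * np.pi) * height_ambiguity_m

# Example: with a 5 m height ambiguity, one full fringe corresponds to 5 m.
print(phase_to_height(np.array([2 * np.pi]), 5.0))  # -> [5.]
```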

Automatic Geometric Calibration of KOMPSAT-2 Stereo Pair Data (KOMPSAT-2 입체영상의 자동 기하 보정)

  • Oh, Kwan-Young;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.2
    • /
    • pp.191-202
    • /
    • 2012
  • High-resolution satellite imagery such as KOMPSAT-2 is delivered with ancillary data containing rational polynomial coefficients (RPCs) for three-dimensional geopositioning. However, image geometries calculated from the RPCs inevitably contain systematic errors, so the RPCs must be corrected using several ground control points (GCPs). In this paper, we propose an efficient method for automatically correcting the image geometry using tie points of a stereo pair and the Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM), without GCPs. The method includes four steps: 1) tie point extraction, 2) determination of the ground coordinates of the tie points, 3) refinement of the ground coordinates using the SRTM DEM, and 4) estimation of the RPC adjustment model parameters. We validated the performance of the proposed method using a KOMPSAT-2 stereo pair. The root mean square errors (RMSE) at the check points (CPs) were about 3.55 m, 9.70 m and 3.58 m in the X, Y and Z directions, respectively. This means that the systematic error of the RPCs can be corrected automatically using the SRTM DEM.
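
The abstract does not specify the form of the RPC adjustment model, so the following is only a sketch of one common choice, an image-space affine bias correction estimated by least squares, together with the per-axis RMSE used to report check-point accuracy (function names are illustrative):

```python
import numpy as np

def estimate_affine_bias(rpc_rowcol: np.ndarray, ref_rowcol: np.ndarray) -> np.ndarray:
    """Estimate a 6-parameter affine correction in image space by least squares.

    rpc_rowcol : (n, 2) row/column positions predicted by the vendor RPCs
    ref_rowcol : (n, 2) reference row/column positions from tie points
    Returns a (3, 2) coefficient matrix so that corrected = [row, col, 1] @ coeffs.
    """
    n = rpc_rowcol.shape[0]
    design = np.hstack([rpc_rowcol, np.ones((n, 1))])              # (n, 3)
    coeffs, *_ = np.linalg.lstsq(design, ref_rowcol, rcond=None)   # (3, 2)
    return coeffs

def rmse_per_axis(residuals: np.ndarray) -> np.ndarray:
    """Per-axis root mean square error, as reported at the check points."""
    return np.sqrt(np.mean(residuals ** 2, axis=0))
```

In a real workflow the reference positions would come from the tie points and the SRTM-refined ground coordinates described in the four-step procedure, and the RMSE would be computed at independent check points.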

Development of Parametric BIM Libraries for Civil Structures using National 2D Standard Drawings (국가 표준도를 이용한 토목 구조물 BIM 파라메트릭 라이브러리 구축에 관한 연구)

  • Kim, Cheong-Woon;Koo, Bonsang
    • Korean Journal of Construction Engineering and Management
    • /
    • v.15 no.4
    • /
    • pp.128-138
    • /
    • 2014
  • The development of infrastructure component libraries is a critical requirement for accelerating the adoption of BIM in the civil engineering sector. Libraries reduce the time needed for BIM model creation, allow accurate quantity take-offs, and enable shared use of standard models within a project. However, such libraries are currently in very short supply in the domestic infrastructure domain. This research introduces library components for retaining walls and box culverts generated from the 2D standard drawings made publicly available by MOLIT. Commercial BIM software was used to create the concrete geometry and rebar, and dimensional/volumetric parameters were defined to maximize the reuse and generality of the libraries. Use of these libraries in a project context demonstrates that they allow accurate and quick quantity take-offs, and easier management of geometric information through a single library instead of numerous 2D drawings. It also demonstrates how easily the geometry of the components can be modified if and when it needs to change. However, the application also showed that some of the rebar components (stirrups and lengthwise rebars) are not properly updated when the concrete geometry is changed, demonstrating the limits of current software applications. The research provides evidence of the many advantages of using BIM libraries in civil engineering, thus providing an incentive for further development of standard libraries and promoting the use of BIM in infrastructure projects.
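
As a purely illustrative sketch, independent of the commercial BIM software used in the study, a parametric box-culvert component driven by dimensional parameters and supporting a simple concrete quantity take-off might look like this (parameter names and values are assumptions):

```python
from dataclasses import dataclass

@dataclass
class BoxCulvert:
    """Minimal parametric box-culvert section (illustrative parameter set)."""
    inner_width_m: float
    inner_height_m: float
    wall_thickness_m: float
    slab_thickness_m: float
    length_m: float

    def concrete_volume_m3(self) -> float:
        """Quantity take-off: outer prism minus the inner opening."""
        outer_w = self.inner_width_m + 2 * self.wall_thickness_m
        outer_h = self.inner_height_m + 2 * self.slab_thickness_m
        return (outer_w * outer_h - self.inner_width_m * self.inner_height_m) * self.length_m

# Changing a single parameter regenerates the geometry and the quantities.
culvert = BoxCulvert(3.0, 2.5, 0.35, 0.40, 20.0)
print(round(culvert.concrete_volume_m3(), 2))  # -> 94.2
```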

Conjunctive Boolean Query Optimization based on Join Sequence Separability in Information Retrieval Systems (정보검색시스템에서 조인 시퀀스 분리성 기반 논리곱 불리언 질의 최적화)

  • Park, Byung-Kwon;Han, Wook-Shin;Whang, Kyu-Young
    • Journal of KIISE:Databases
    • /
    • v.31 no.4
    • /
    • pp.395-408
    • /
    • 2004
  • A conjunctive Boolean text query is a query that searches for text documents containing all of the specified keywords, and it is the most frequently used query form in information retrieval systems. Typically, the query specifies a long list of keywords for better precision, and in this case the order of keyword processing has a significant impact on query speed. Currently known approaches to this ordering are based on heuristics and therefore cannot guarantee an optimal ordering. A systematic approach can be taken by leveraging a database query processing algorithm such as dynamic programming, but it is not suitable for a text query with a typically long list of keywords because of the algorithm's exponential run time, O(n·2^(n-1)) for n keywords. Considering these problems, we propose a new approach based on a property called join sequence separability. This property states that the optimal join sequence is separable into two subsequences of different join methods under a certain condition on the joined relations, and it enables us to find a globally optimal join sequence in O(n·2^(n-1)). In this paper we describe the property formally, present an optimization algorithm based on the property, prove that the algorithm finds an optimal join sequence, and validate our approach through simulation using an analytic cost model. Comparison with heuristic text query optimization approaches shows up to 100 times faster query processing, and comparison with the dynamic programming approach shows exponentially faster query optimization (e.g., 600 times for a 10-keyword query).
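
The separability-based optimizer itself is not given in the abstract; as context only, a conjunctive query over an inverted index is typically evaluated by intersecting posting lists, and the processing order is exactly what the optimizer chooses. The sketch below uses the common shortest-list-first heuristic as a baseline, not the paper's algorithm:

```python
def conjunctive_query(inverted_index: dict, keywords: list) -> set:
    """Intersect posting lists, shortest first (a common heuristic baseline).

    The paper's contribution is an optimal ordering based on join sequence
    separability; this sketch only shows the order-sensitive evaluation step
    that such an optimizer would drive.
    """
    postings = sorted((inverted_index[k] for k in keywords), key=len)
    result = postings[0]
    for plist in postings[1:]:
        result = result & plist
        if not result:          # early exit: no document contains all keywords
            break
    return result

index = {"formal": {1, 2, 5, 9}, "model": {2, 5, 7}, "verification": {2, 9}}
print(conjunctive_query(index, ["formal", "model", "verification"]))  # {2}
```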

Effective Reference Probability Incorporating the Effect of Expiration Time in Web Cache (웹 캐쉬에서 만기시간의 영향을 고려한 유효참조확률)

  • Lee, Jeong-Joon;Moon, Yang-Se;Whang, Kyu-Young;Hong, Eui-Kyung
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.688-701
    • /
    • 2001
  • Web caching has become an important problem in addressing the performance issues of web applications. In this paper we propose a method that enhances the performance of web caching by incorporating the expiration time of web data. We introduce the notion of the effective reference probability, which incorporates the effect of the expiration time into the reference probability used in existing cache replacement algorithms. We formally define the effective reference probability and derive it theoretically using a probabilistic model. By simply replacing the reference probabilities with effective reference probabilities in existing cache replacement algorithms, we can take the effect of the expiration time into account. Performance evaluation through experiments shows that replacement algorithms using the effective reference probability always outperform the existing ones. The reason is that the proposed method precisely reflects the theoretical probability of obtaining the cache effect and thus incorporates the influence of the expiration time more effectively. In particular, when the cache fraction is 0.05 and data updates are comparatively frequent (i.e., the update frequency is more than 1/10 of the reference frequency), the performance enhancement is more than 30% in LRU-2 and 13% in Aggarwal's method (PSS integrating a refresh overhead factor). The results show that the effective reference probability contributes significantly to the performance of the web cache in the presence of expiration times.
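
The paper derives the exact form from its probabilistic model, which the abstract does not reproduce; the sketch below only illustrates the idea of discounting an object's reference probability by the chance that it is referenced again before its cached copy expires, under an assumed exponential inter-reference model:

```python
import math

def effective_reference_probability(ref_prob: float,
                                    ref_rate_per_sec: float,
                                    time_to_expire_sec: float) -> float:
    """Discount an object's reference probability by the probability that it
    is referenced again before its cached copy expires.

    Assumes exponentially distributed inter-reference times; the paper derives
    the exact expression from its own probabilistic model, so treat this only
    as an illustration of the concept.
    """
    p_hit_before_expiry = 1.0 - math.exp(-ref_rate_per_sec * time_to_expire_sec)
    return ref_prob * p_hit_before_expiry

# An object referenced often but expiring quickly loses much of its cache value.
print(effective_reference_probability(0.02, 1 / 600, 300))    # ~0.0079
print(effective_reference_probability(0.02, 1 / 600, 86400))  # ~0.02
```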


Detecting Surface Changes Triggered by Recent Volcanic Activities at Kīlauea, Hawai'i, by using the SAR Interferometric Technique: Preliminary Report (SAR 간섭기법을 활용한 하와이 킬라우에아 화산의 2018 분화 활동 관측)

  • Jo, MinJeong;Osmanoglu, Batuhan;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_4
    • /
    • pp.1545-1553
    • /
    • 2018
  • Recent eruptive activity at Kīlauea Volcano, which started at the end of April 2018, was accompanied by rapid ground deflation between May and June 2018. At the summit, the Halema'uma'u lava lake continued to drop rapidly and Kīlauea's summit continued to deflate; GPS receivers and electronic tiltmeters detected surface deformation greater than 2 meters. We explored the time-series surface deformation at Kīlauea Volcano, focusing on the early stage of the eruptive activity, using multi-temporal COSMO-SkyMed SAR imagery. The maximum observed deformation in the line-of-sight (LOS) direction was about -1.5 m, which corresponds to approximately -1.9 m of subsidence after accounting for the incidence angle. The results show that the summit began to deflate just after the event started and that most of the deformation occurred between early May and the end of June. Moreover, we confirmed that summit deflation rarely occurred after July 2018, which suggests that the volcanic activity entered a stable stage. The best-fit magma source model based on the time-series surface deformation indicates that the magma chamber lay at depths of 2-3 km, with a deepening trend over time. Along with the change in source depth, the center of each magma source model migrated toward the southwest over time. These results carry a potential risk of bias because they are based on a single-track observation. Therefore, to complement the initial results, a precise magma source model based on three-dimensional measurements needs to be generated in further research.
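
The conversion from LOS to vertical displacement quoted above follows the standard geometric projection; a minimal sketch, assuming purely vertical motion and an incidence angle of roughly 38° (an assumed value, since the abstract does not state it), is:

```python
import math

def los_to_vertical(d_los_m: float, incidence_deg: float) -> float:
    """Project a line-of-sight displacement onto the vertical, assuming the
    ground motion is purely vertical (horizontal components neglected)."""
    return d_los_m / math.cos(math.radians(incidence_deg))

# With roughly a 38 degree incidence angle, -1.5 m LOS maps to about -1.9 m
# of subsidence, consistent with the values quoted in the abstract.
print(round(los_to_vertical(-1.5, 38.0), 2))  # -> -1.9
```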

Analysis of the Status of Natural Language Processing Technology Based on Deep Learning (딥러닝 중심의 자연어 처리 기술 현황 분석)

  • Park, Sang-Un
    • The Journal of Bigdata
    • /
    • v.6 no.1
    • /
    • pp.63-81
    • /
    • 2021
  • The performance of natural language processing is improving rapidly owing to the recent development and application of machine learning and deep learning technologies, and as a result its field of application is expanding. In particular, as the demand for analysis of unstructured text data increases, interest in NLP (natural language processing) is also increasing. However, because of the complexity and difficulty of natural language preprocessing and of machine learning and deep learning theory, there are still high barriers to the use of natural language processing. In this paper, to support an overall understanding of NLP, we examine the main fields of NLP that are currently being actively researched and the current state of the major technologies centered on machine learning and deep learning, thereby providing a foundation for understanding and utilizing NLP more easily. We first investigate how NLP has been positioned within AI (artificial intelligence) through changes in the taxonomy of AI technology. The main areas of NLP, consisting of language modeling, text classification, text generation, document summarization, question answering and machine translation, are explained together with state-of-the-art deep learning models. In addition, the major deep learning models used in NLP are described, and the datasets and evaluation measures used for performance evaluation are summarized. We hope that researchers who want to utilize NLP for various purposes in their fields will be able to understand the overall technical status and the main technologies of NLP through this paper.

Development of an Efficiency Calibration Model Optimization Method for Improving In-Situ Gamma-Ray Measurement for Non-Standard NORM Residues (비정형 공정부산물 In-Situ 감마선 측정 정확도 향상을 위한 효율교정 모델 최적화 방법 개발)

  • WooCheol Choi;Tae-Hoon Jeon;Jung-Ho Song;KwangPyo Kim
    • Journal of Radiation Industry
    • /
    • v.17 no.4
    • /
    • pp.471-479
    • /
    • 2023
  • In in-situ radioactivity measurement, efficiency calibration relies on predefined models that simulate a sample's geometry and radioactivity distribution. However, simplified efficiency calibration models lead to uncertainties in the efficiency curves, which in turn affect the derived radioactivity concentrations. This study aims to develop an efficiency calibration optimization methodology to improve the accuracy of in-situ gamma-ray measurements of byproducts from industrial facilities. To accomplish this objective, a drive mechanism for rotational measurement of a byproduct simulator and a sample was constructed. Using ISOCS, an efficiency calibration model of the designed object was generated. A sensitivity analysis of the efficiency calibration model was then performed, and the efficiency curve was optimized using the sensitivity analysis results. Finally, the radioactivity concentration of the simulated object was estimated and compared with the designed (certified) value. For the sensitivity assessment of the influencing factors of the efficiency calibration model, the ISOCS Uncertainty Estimator (IUE) was used for the horizontal and vertical size and the density of the measured object. The standard deviation of the measurement efficiency as a function of the longitudinal size and density of the efficiency calibration model decreased with increasing energy. When the optimized efficiency calibration model was used, the measurement efficiency obtained with IUE was improved over that obtained with ISOCS at the 228Ac energy (911 keV) for the nuclide under analysis. Using the ISOCS efficiency calibration method, the difference between the measured radioactivity concentration and the design value for each measurement direction of the simulated object was 4.1% (1% to 10%) on average. With the ISOCS IUE efficiency calibration method, the difference between the estimated radioactivity concentration and the design value was 3.6% (1% to 8%) on average, closer to the design value than the ISOCS-only calibration. In other words, the radioactivity concentration estimated with the optimized efficiency curve was close to the designed radioactivity concentration. The results of this study can be utilized as a key basis for the development of regulatory technologies for the treatment and disposal of waste generated during the operation, maintenance and facility replacement of domestic byproduct-generating facilities.

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate and market share is essential for setting a company's business strategy, and there has been continuous demand in various fields for product-level market information. However, such information has generally been provided at the industry level or for broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data for the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as the product name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based methods or methods requiring multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can meet unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reporting by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure such as Jaccard similarity with Word2Vec. Also, the product group clustering could be changed to other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they can further improve the performance of the basic model conceptually proposed in this study.
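
Under the parameters stated in the abstract (vector dimension 300, window size 15), the pipeline could be sketched roughly as follows with gensim; the toy product names, sales figures, similarity threshold and the `market_size` helper are illustrative assumptions, and the learned similarities are only meaningful on a corpus of realistic size:

```python
from gensim.models import Word2Vec

# Toy stand-ins for the 345,103 product-name records and their sales figures.
product_names = [["stainless", "refrigerator"], ["kimchi", "refrigerator"],
                 ["household", "freezer"], ["laptop", "computer"],
                 ["tablet", "computer"]]
sales = {"stainless refrigerator": 120.0, "kimchi refrigerator": 80.0,
         "household freezer": 60.0, "laptop computer": 200.0,
         "tablet computer": 90.0}

# Embedding parameters follow the abstract: vector dimension 300, window 15.
model = Word2Vec(sentences=product_names, vector_size=300, window=15,
                 min_count=1, sg=1, seed=1)

def market_size(index_word: str, threshold: float = 0.3) -> float:
    """Sum the sales of products whose names contain a token that is
    cosine-similar to the given KSIC index word (threshold is tunable)."""
    similar = {w for w, s in model.wv.most_similar(index_word, topn=20)
               if s >= threshold}
    similar.add(index_word)
    return sum(v for name, v in sales.items()
               if any(tok in similar for tok in name.split()))

# Aggregate the product group around the index word "refrigerator".
print(market_size("refrigerator"))
```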