• Title/Summary/Keyword: Problem instance space


A Study on Enhanced 3PAKE Scheme against Password Guessing Attack in Smart Home Environment (스마트홈 환경에서 패스워드 추측 공격에 안전한 개선된 3PAKE 기법에 대한 연구)

  • Lee, Dae-Hwi; Lee, Im-Yeong
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.6 / pp.1471-1481 / 2016
  • As interest in IoT grows, a wide range of IoT services is being launched. The smart home brings IoT into the user's residential space, tying it closely to daily life. Consequently, if an unauthorized user gains access to a device inside a smart home, the damage can be severe precisely because daily life is affected; for instance, issuing an unauthenticated command to an internal door lock can directly threaten the user's property, much like a home invasion. To prevent this problem, this paper introduces a 3PAKE scheme that provides authenticated key exchange through the home gateway using Password-based Authenticated Key Exchange (PAKE).
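
To make the gateway-mediated key-exchange idea concrete, the following is a minimal structural sketch in Python. It is not the paper's 3PAKE scheme and is not resistant to password-guessing attacks; it only illustrates a flow in which two hypothetical devices each prove knowledge of their own password to a home gateway, which then helps them derive a shared session key. All names, messages, and primitives are assumptions for illustration only.

```python
# Structural sketch of gateway-mediated, password-based key exchange.
# NOT the paper's 3PAKE protocol and NOT secure against offline guessing.
import os
import hmac
import hashlib

def kdf(*parts: bytes) -> bytes:
    """Derive a key by hashing the concatenation of the inputs."""
    return hashlib.sha256(b"||".join(parts)).digest()

class Gateway:
    def __init__(self, passwords: dict):
        # The gateway stores a verifier (here: a plain hash) per registered device.
        self.verifiers = {d: hashlib.sha256(pw.encode()).digest()
                          for d, pw in passwords.items()}

    def authenticate(self, device: str, proof: bytes, nonce: bytes) -> bool:
        # A device proves knowledge of its password via an HMAC over a fresh nonce.
        expected = hmac.new(self.verifiers[device], nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

def device_proof(password: str, nonce: bytes) -> bytes:
    verifier = hashlib.sha256(password.encode()).digest()
    return hmac.new(verifier, nonce, hashlib.sha256).digest()

# Illustrative run: devices A and B establish a session key via the gateway.
gw = Gateway({"A": "pw-a", "B": "pw-b"})
nonce_a, nonce_b = os.urandom(16), os.urandom(16)

ok_a = gw.authenticate("A", device_proof("pw-a", nonce_a), nonce_a)
ok_b = gw.authenticate("B", device_proof("pw-b", nonce_b), nonce_b)

if ok_a and ok_b:
    # The gateway contributes fresh randomness; both devices would derive the
    # same session key from material distributed over the authenticated links.
    gw_secret = os.urandom(16)
    session_key = kdf(gw_secret, nonce_a, nonce_b)
    print("session key established:", session_key.hex()[:16], "...")
```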

Plasma Etching Process based on Real-time Monitoring of Radical Density and Substrate Temperature

  • Takeda, K.; Fukunaga, Y.; Tsutsumi, T.; Ishikawa, K.; Kondo, H.; Sekine, M.; Hori, M.
    • Proceedings of the Korean Vacuum Society Conference / 2016.02a / pp.93-93 / 2016
  • Large-scale integrated circuits (LSIs) have been improved through the shrinkage of circuit dimensions. Smaller chip sizes and higher circuit density require miniaturization of the line width and the space between metal interconnections. Therefore, extremely precise control of the critical dimension and pattern profile is necessary to fabricate next-generation nano-electronic devices; the pattern profile in plasma etching must be controlled with sub-nanometer accuracy. To realize an etching process that meets this requirement, it is very important to understand the etching mechanism and to control the process precisely based on real-time monitoring of internal plasma parameters such as the density of etching species and the surface temperature of the substrate. For instance, it is known that the etched profiles of organic low-dielectric (low-k) films are sensitive to the substrate temperature and to the density ratio of H and N atoms in H2/N2 plasma [1]. In this study, we introduced feedback control of the actual substrate temperature and the radical density ratio monitored in real time, and then evaluated the dependence of the etch rates and profiles of organic films on the substrate temperature. Organic low-k films were etched by a dual-frequency capacitively coupled plasma using a mixture of H2/N2 gases. A 100-MHz power was supplied to the upper electrode for plasma generation, and the Si substrate was electrostatically chucked to a lower electrode biased by a 2-MHz power. To investigate the effects of H and N radicals on the etching profile of organic low-k films, the absolute H and N atom densities were measured by vacuum ultraviolet absorption spectroscopy [2]. Moreover, using an optical fiber-type low-coherence interferometer [3], the substrate temperature was measured in real time during the etching process. The measurements showed that the temperature rose rapidly just after plasma ignition and then gradually saturated; this temporal change of the substrate temperature is a crucial issue for controlling the surface reactions of reactive species. Therefore, by switching the plasma discharge on and off at intervals, the substrate temperature was maintained within ±1.5°C of the set value, and as a result the temperature was kept within 3°C during the etching process. We then etched organic films with a line-and-space pattern using this system. Cross-sections of the organic films etched for 50 s at substrate temperatures of 20°C and 100°C were observed by SEM; they differed in sidewall profile, suggesting that the reactions on the sidewalls change with the substrate temperature. The proposed substrate-temperature control method, combining real-time temperature monitoring with intermittent plasma generation, is expected to contribute to the realization of fine-pattern etching.
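
As a rough illustration of the intermittent-discharge temperature control described above, the Python sketch below simulates an on-off (bang-bang) controller that gates a heat source, standing in for the plasma discharge, so that a simulated substrate temperature stays near a set point within a ±1.5°C band. The thermal model and all constants are hypothetical, not values from the experiment.

```python
# Hypothetical on-off control of a simulated substrate temperature.
# The plant model and constants are illustrative only.

def simulate_on_off_control(setpoint_c: float = 100.0,
                            band_c: float = 1.5,
                            steps: int = 2000,
                            dt_s: float = 0.05) -> float:
    temp = 20.0            # starting substrate temperature [degC]
    ambient = 20.0
    heat_rate = 8.0        # heating rate while the discharge is on [degC/s]
    cool_coeff = 0.05      # cooling coefficient toward ambient [1/s]
    plasma_on = True
    max_dev = 0.0

    for _ in range(steps):
        # Gate the discharge when the temperature leaves the allowed band.
        if temp > setpoint_c + band_c:
            plasma_on = False
        elif temp < setpoint_c - band_c:
            plasma_on = True

        heating = heat_rate if plasma_on else 0.0
        temp += (heating - cool_coeff * (temp - ambient)) * dt_s

        if temp > setpoint_c - band_c:   # track deviation once warmed up
            max_dev = max(max_dev, abs(temp - setpoint_c))

    return max_dev

print("max deviation from set point: %.2f degC" % simulate_on_off_control())
```

With these toy constants the hysteresis keeps the simulated temperature within roughly 2°C of the set value, mirroring the kind of bound the abstract reports for the real process.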

A Study on Multi-Object Data Split Technique for Deep Learning Model Efficiency (딥러닝 효율화를 위한 다중 객체 데이터 분할 학습 기법)

  • Jong-Ho Na; Jun-Ho Gong; Hyu-Soung Shin; Il-Dong Yun
    • Tunnel and Underground Space / v.34 no.3 / pp.218-230 / 2024
  • Recently, many studies have applied computer vision to safety management on construction sites. Anchor box parameters are used in state-of-the-art deep learning-based object detection and segmentation, and optimized parameters are critical for consistent accuracy during training. These parameters are generally tuned heuristically by fixing the anchor shape and size, with a single parameter set governing training for the whole model. However, anchor box parameters are sensitive to the type and size of the objects, and as the amount of training data grows, a single parameter set cannot reflect all the characteristics of the training data. Therefore, this paper proposes applying multiple parameter sets, each optimized through a data split, to solve this problem. Criteria for efficiently partitioning the integrated training data according to object size, number of objects, and object shape were established, and the effectiveness of the proposed data split method was verified through a comparative study of the conventional scheme and the proposed methods.
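
A hedged sketch of the data-split idea, under our own assumptions rather than the paper's criteria: images are grouped by simple object statistics (median box area and object count), and a separate set of anchor sizes is fitted per group by k-means over box widths and heights. The thresholds, group names, and use of scikit-learn are illustrative only.

```python
# Hypothetical data-split + per-split anchor fitting; not the paper's criteria.
import numpy as np
from sklearn.cluster import KMeans

def split_key(boxes: np.ndarray, small_thr: float = 32**2, many_thr: int = 10) -> str:
    """boxes: (N, 2) array of [width, height] per object in one image."""
    areas = boxes[:, 0] * boxes[:, 1]
    size = "small" if np.median(areas) < small_thr else "large"
    count = "many" if len(boxes) >= many_thr else "few"
    return f"{size}-{count}"

def anchors_per_split(dataset: dict, n_anchors: int = 3) -> dict:
    """dataset: mapping image_id -> (N, 2) box array. Returns anchor sets per split."""
    groups: dict = {}
    for boxes in dataset.values():
        groups.setdefault(split_key(boxes), []).append(boxes)

    anchors = {}
    for key, box_lists in groups.items():
        all_boxes = np.vstack(box_lists)
        k = min(n_anchors, len(all_boxes))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_boxes)
        anchors[key] = km.cluster_centers_     # one anchor set per data split
    return anchors

# Toy usage with random boxes
rng = np.random.default_rng(0)
toy = {i: rng.uniform(8, 128, size=(rng.integers(2, 20), 2)) for i in range(20)}
for split, a in anchors_per_split(toy).items():
    print(split, np.round(a, 1))
```

Each resulting anchor set would then be used when training on its own split, instead of one heuristic set for the whole integrated dataset.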

A New Item Recommendation Procedure Using Preference Boundary

  • Kim, Hyea-Kyeong; Jang, Moon-Kyoung; Kim, Jae-Kyeong; Cho, Yoon-Ho
    • Asia Pacific Journal of Information Systems / v.20 no.1 / pp.81-99 / 2010
  • Lately, in consumer markets the number of new items is increasing at an overwhelming rate, while consumers have limited access to information about those new products when trying to make a sensible, well-informed purchase. Therefore, item providers and customers need a system that recommends the right items to the right customers. In particular, whenever new items are released, a recommender system specializing in new items can help item providers locate and identify potential customers. Currently, new items are added to existing systems without being specially noted to consumers, making it difficult for consumers to identify and evaluate new products introduced to the market. Most previous recommender systems rely on customers' usage history, but for new items such history does not exist, so the system cannot recommend them to potential consumers. Although the collaborative filtering (CF) approach is not directly applicable to the new-item problem, its basic principle, identifying similar customers, i.e., neighbors, and recommending items that those similar customers have liked in the past, can still be exploited. This research suggests a hybrid recommendation procedure based on the preference boundary of the target customer in the feature space, for recommending new items only. The basic principle is that if a new item falls within the preference boundary of a target customer, it is judged to be preferred by that customer. Customers' preferences and the characteristics of items, including new items, are represented in a feature space, and the scope of the target customer's preference boundary is extended using those of the neighbors. The new-item recommendation procedure consists of three steps. The first step analyzes the profiles of items, which are represented as k-dimensional feature vectors. The second step determines the representative point of the target customer's preference boundary, the centroid, based on a personal information set; three algorithms are developed for this purpose: one uses the centroid of the target customer only (TC), another uses the centroid of a (dummy) big target customer composed of the target customer and his/her neighbors (BC), and the third uses the centroids of the target customer and his/her neighbors (NC). The third step determines the range of the preference boundary, the radius, for which the suggested algorithm uses the average distance (AD) between the centroid and all purchased items. We test whether the CF-based approach to determining the centroid of the preference boundary improves recommendation quality. For this purpose, the two hybrid algorithms BC and NC, which use neighbors when deciding the centroid, are compared against the CB algorithm TC, which uses the target customer only. We measured the effectiveness of the suggested algorithms through a series of experiments with real mobile image transaction data, splitting the period from 1 June 2004 to 31 July 2004 and the period from 1 August to 31 August 2004 into a training set and a test set, respectively.
The training set is used to build the preference boundary, and the test set is used to evaluate the performance of the suggested hybrid recommendation procedure. The main aim of this research is to compare the hybrid recommendation algorithms with the CB algorithm. To evaluate each algorithm, we compare the list of new items purchased during the test period with the list of items recommended by the suggested algorithms, using the hit ratio as the evaluation metric. The hit ratio is defined as the ratio of the hit set size to the recommended set size, where the hit set size is the number of successful recommendations in our experiment and the test set size is the number of items purchased during the test period. The experimental results show that the hit ratios of BC and NC are higher than that of TC, which means that using neighbors is more effective for recommending new items; that is, a hybrid algorithm using CF is more effective for recommending new items to consumers than an algorithm using CB alone. The hit ratio of BC is lower than that of NC because BC is defined as a dummy or virtual customer who purchased all the items of the target customer and the neighbors; the centroid of BC therefore often shifts away from that of TC and tends to reflect a skewed picture of the target customer's characteristics. The recommendation algorithm using NC shows the best hit ratio, because NC has sufficient information about the target customer and the neighbors without diluting the information about the target customer.
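
The preference-boundary procedure lends itself to a short sketch. The code below assumes items are k-dimensional feature vectors and models the boundary as a hypersphere: the centroid is computed as in the TC or NC variants described above, the radius is the average distance (AD) from the centroid to the purchased items, a new item is recommended if it falls inside the boundary, and the hit ratio is the share of recommended items actually purchased in the test period. Function names and the toy data are ours, not the authors'.

```python
# Sketch of a hypersphere preference boundary in a k-dimensional feature space.
import numpy as np

def centroid_tc(target_items: np.ndarray) -> np.ndarray:
    """TC: centroid of the target customer's purchased-item vectors only."""
    return target_items.mean(axis=0)

def centroid_nc(target_items: np.ndarray, neighbor_item_sets: list) -> np.ndarray:
    """NC: average of the target centroid and each neighbor's own centroid."""
    centroids = [target_items.mean(axis=0)] + [s.mean(axis=0) for s in neighbor_item_sets]
    return np.mean(centroids, axis=0)

def radius_ad(center: np.ndarray, purchased: np.ndarray) -> float:
    """AD: average distance from the center to all purchased items."""
    return float(np.linalg.norm(purchased - center, axis=1).mean())

def recommend(new_items: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    """A new item is recommended if it falls inside the preference boundary."""
    inside = np.linalg.norm(new_items - center, axis=1) <= radius
    return np.where(inside)[0]

def hit_ratio(recommended: set, purchased_in_test: set) -> float:
    """Hit ratio = |recommended ∩ purchased in test| / |recommended|."""
    return len(recommended & purchased_in_test) / len(recommended) if recommended else 0.0

# Toy usage with random feature vectors
rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, size=(15, 4))            # target's purchases (training)
neighbors = [rng.normal(0.3, 1.0, size=(10, 4)) for _ in range(3)]
new_items = rng.normal(0.2, 1.5, size=(30, 4))          # candidate new items

center = centroid_nc(target, neighbors)
rad = radius_ad(center, np.vstack([target] + neighbors))
rec = set(recommend(new_items, center, rad).tolist())
print("recommended new items:", sorted(rec))
print("hit ratio vs. a toy test set:", hit_ratio(rec, {0, 3, 7, 11}))
```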

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu; Park, Hyun-Jung; Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To address this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available in the future, ranking search results effectively and efficiently will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page has been estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance increases further if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. A page with a high authority score is an authority on a given topic and is referred to by many pages; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, which makes ranking algorithms for RDF knowledge bases highly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so the link-structure-based ranking method seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a single recursive property, a 'refers to' property corresponding to hyperlinks. The Semantic Web, by contrast, encompasses various kinds of classes and properties, and consequently ranking methods used in the WWW must be modified to reflect the complexity of its information space. Previous research addressed the ranking of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to Kleinberg's authority score and hub score, respectively. They focused on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristic of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and presented experimental results verifying its applicability to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in sufficient detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important yet densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important but simply because it is very common and therefore has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken in previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are intended to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and will turn out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations posed by previous research. In addition, we propose two ways to incorporate data-type properties, which have not previously been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which had been overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
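
To illustrate the class-oriented idea of user-assigned property weights, here is a simplified, hypothetical sketch (not the authors' algorithm): a PageRank-style iteration over an RDF-like graph in which each triple (s, p, o) transfers importance from its subject to its object in proportion to the weight the user gives property p. The toy triples, property names, and weights are invented for illustration.

```python
# Simplified weighted, PageRank-style ranking over RDF-like triples.
import numpy as np

def rank_resources(triples, property_weights, damping=0.85, iters=50):
    nodes = sorted({s for s, _, _ in triples} | {o for _, _, o in triples})
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)

    # W[i, j] holds the total weight of properties linking subject j to object i.
    W = np.zeros((n, n))
    for s, p, o in triples:
        W[idx[o], idx[s]] += property_weights.get(p, 0.0)

    # Normalize each column so outgoing weight sums to 1 (dangling nodes spread evenly).
    col_sums = W.sum(axis=0)
    for j in range(n):
        W[:, j] = W[:, j] / col_sums[j] if col_sums[j] > 0 else 1.0 / n

    # Power iteration with damping, as in PageRank.
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * W @ r
    return dict(zip(nodes, r / r.sum()))

# Toy RDF-like data: rank resources of one class with user-chosen property weights.
triples = [
    ("paperA", "cites", "paperB"), ("paperC", "cites", "paperB"),
    ("paperB", "cites", "paperD"), ("paperA", "sameVenueAs", "paperC"),
]
weights = {"cites": 1.0, "sameVenueAs": 0.2}   # user-assigned per-property weights
for res, score in sorted(rank_resources(triples, weights).items(),
                         key=lambda kv: -kv[1]):
    print(f"{res}: {score:.3f}")
```

The design choice this mirrors is that the user, rather than the predicate structure alone, decides how much each property should contribute when ranking resources of a given class.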