
An Adaptive Multi-Level Thresholding and Dynamic Matching Unit Selection for IC Package Marking Inspection (IC 패키지 마킹검사를 위한 적응적 다단계 이진화와 정합단위의 동적 선택)

  • Kim, Min-Ki
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.245-254
    • /
    • 2002
  • An IC package marking inspection system using machine vision locates and identifies the target elements from the input image and decides the marking quality by comparing the extracted target elements with the standard patterns. This paper proposes an adaptive multi-level thresholding (AMLT) method which is suitable for a series of operations such as locating the target IC package, extracting the characters, and detecting the Pin 1 dimple. It also proposes a dynamic matching unit selection (DMUS) method which is robust to noise as well as effective at catching local marking errors. The main idea of the AMLT method is to restrict the input of Otsu's thresholding algorithm to a specified area and a partial range of gray values. By doing so, it can adapt to the specific domain. The DMUS method dynamically selects the matching unit according to the result of character extraction and layout analysis. Therefore, in spite of the various erroneous situations that occur in the process of character extraction and layout analysis, it can select a minimal matching unit in any environment. In an experiment with 280 IC package images of eight types, the correct extraction rate of the IC package and Pin 1 dimple was 100% and the correct decision rate of marking quality was 98.8%. This result shows that the proposed methods are effective for IC package marking inspection.
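The sketch below illustrates the core AMLT idea described in the abstract: Otsu's threshold computed only over a restricted region of interest and a partial range of gray values. It is an illustrative reconstruction, not the authors' implementation; the function names, the ROI tuple, and the 8-bit grayscale NumPy image are all assumptions.

```python
import numpy as np

def otsu_threshold(hist):
    """Classic Otsu: pick the threshold that maximizes between-class variance."""
    levels = np.arange(len(hist), dtype=float)
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * hist[:t]).sum() / w0
        mu1 = (levels[t:] * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def adaptive_threshold(image, roi, gray_range):
    """AMLT-style restriction: build the histogram only from pixels inside the
    given ROI whose gray values fall within gray_range, then apply Otsu."""
    y0, y1, x0, x1 = roi
    patch = image[y0:y1, x0:x1]
    lo, hi = gray_range
    values = patch[(patch >= lo) & (patch <= hi)]
    hist = np.bincount(values.ravel(), minlength=256).astype(float)
    return otsu_threshold(hist)

# e.g. threshold only the package body area, ignoring near-black background:
# t = adaptive_threshold(img, roi=(50, 400, 80, 520), gray_range=(40, 220))
```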

Effect of the Nonlinearity of the Soft Soil on the Elastic and Inelastic Seismic Response Spectra (연약지반의 비선형성이 탄성 및 비탄성 지진응답스펙트럼에 미치는 영향)

  • Kim, Yong-Seok
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.9 no.4 s.44
    • /
    • pp.11-18
    • /
    • 2005
  • Inelastic seismic analysis is necessary for seismic design due to the nonlinear behavior of a structure-soil system, and the importance of performance-based design considering soil-structure interaction is recognized for reasonable seismic design. In this study, elastic and inelastic seismic response analyses of a single-degree-of-freedom system on a soft soil layer were performed considering the nonlinearity of the soil for 11 weak or moderate and 5 strong earthquakes scaled to nominal peak accelerations of 0.075g, 0.15g, 0.2g and 0.3g. Seismic response analyses for the structure-soil system were performed in one step, applying the earthquake motions to the bedrock in the frequency domain using pseudo 3-D dynamic analysis software. Study results indicate that it is necessary to consider nonlinear soil-structure interaction effects and to perform performance-based seismic design for the various soil layers rather than to follow the routine procedures specified in the seismic design codes. Nonlinearity of the soft soil excited with the weak earthquakes also significantly affected the elastic and inelastic responses due to the nonlinear soil amplification of the earthquake motions, and this was especially pronounced for the elastic ones.
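For readers unfamiliar with response spectra, the sketch below shows how an elastic displacement spectrum can be computed: a linear single-degree-of-freedom oscillator is time-stepped through a ground-motion record scaled to a nominal peak acceleration, and its peak displacement is recorded for each period. This is a generic Newmark average-acceleration scheme, not the pseudo 3-D frequency-domain analysis used in the study, and it covers only the elastic case; an inelastic spectrum would additionally require a hysteretic force model.

```python
import numpy as np

def sdof_peak_displacement(accel_g, dt, period, damping=0.05):
    """Peak displacement of a linear SDOF oscillator (unit mass) under ground
    acceleration, via the Newmark average-acceleration method."""
    wn = 2.0 * np.pi / period
    m, c, k = 1.0, 2.0 * damping * wn, wn ** 2
    beta, gamma = 0.25, 0.5
    u = v = 0.0
    a = -accel_g[0]                       # from m*a + c*v + k*u = -m*ag
    keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    peak = 0.0
    for ag in accel_g[1:]:
        p = (-ag
             + m * (u / (beta * dt ** 2) + v / (beta * dt) + a * (0.5 / beta - 1.0))
             + c * (gamma * u / (beta * dt) + v * (gamma / beta - 1.0)
                    + dt * a * (gamma / (2.0 * beta) - 1.0)))
        u_new = p / keff
        v_new = (gamma / (beta * dt) * (u_new - u) + v * (1.0 - gamma / beta)
                 + dt * a * (1.0 - gamma / (2.0 * beta)))
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - a * (0.5 / beta - 1.0)
        u, v, a = u_new, v_new, a_new
        peak = max(peak, abs(u))
    return peak

def elastic_displacement_spectrum(accel_g, dt, periods, target_pga):
    """Scale the record to a nominal peak ground acceleration, then compute the
    spectral displacement for each structural period."""
    record = np.asarray(accel_g, dtype=float)
    record = record * (target_pga / np.abs(record).max())
    return [sdof_peak_displacement(record, dt, T) for T in periods]
```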

Image Warping Using Vector Field Based Deformation and Its Application to Texture Mapping (벡터장 기반 변형기술을 이용한 이미지 와핑 방법 : 텍스쳐 매핑에의 응용을 중심으로)

  • Seo, Hye-Won;Cordier, Frederic
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.5
    • /
    • pp.404-411
    • /
    • 2009
  • We introduce in this paper a new method for smooth, foldover-free warping of images, based on the vector field deformation technique proposed by Von Funck et al. It allows users to specify the constraints in two different ways: positional constraints to constrain the position of a point in the image, and gradient constraints to constrain the orientation and scaling of some parts of the image. From the user-specified constraints, it computes in the image domain a C1-continuous velocity vector field, along which each pixel progressively moves from its original position to the target. The target positions of the pixels are obtained by solving the resulting differential equations with the 4th-order Runge-Kutta method. We show how our method can be useful for texture mapping with hard constraints. We start with an unconstrained planar embedding of a target mesh using a previously known method (Least Squares Conformal Map). Then, in order to obtain a texture map that satisfies the given constraints, we use the proposed warping method to align the features of the texture image with those on the unconstrained embedding. Compared to previous work, our method generates a smoother texture mapping, offers a higher level of control for defining the constraints, and is simpler to implement.
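The numerical core of the warping step, advecting each pixel along the velocity field with classical 4th-order Runge-Kutta, can be sketched as follows. The velocity field here is a placeholder callable; in the paper it is the C1-continuous field built from the positional and gradient constraints, which is not reproduced here.

```python
import numpy as np

def advect_point(p0, velocity, t0=0.0, t1=1.0, steps=100):
    """Move a point along a time-dependent velocity field with classical RK4.
    `velocity(p, t)` must return a 2-D vector; it stands in for the
    C1-continuous field derived from the user constraints."""
    p = np.asarray(p0, dtype=float)
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(p + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(p + h * k3, t + h)
        p = p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return p

# Toy example: a rotational field; in the warping method every pixel of the
# source image is advected this way to obtain its foldover-free target position.
swirl = lambda p, t: np.array([-p[1], p[0]])
target = advect_point([0.3, 0.1], swirl)
```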

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.159-185
    • /
    • 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary, taking into consideration various constraints and preferences. When choosing POIs to visit, one might ask friends to suggest them, search for information on the Web, or seek advice from travel agents; however, those options have their limitations. First, the knowledge of friends is limited to the places they have visited. Second, the tourism information on the internet may be vast, but at the same time it may force one to invest a lot of time reading and filtering it. Lastly, travel agents might be biased towards providers of certain travel products when suggesting itineraries. In recent years, many researchers have tried to deal with the huge amount of tourism information available on the internet. They explored the wisdom of the crowd through the overwhelming number of images shared by people on social media sites. Furthermore, trip planning problems are usually formulated as 'Tourist Trip Design Problems' and solved using various search algorithms with heuristics. Recommendation systems employing various techniques have been set up to cope with the overwhelming tourism information available on the internet. Prediction models of recommendation systems are typically built using a large dataset; however, such a dataset is not always available. For other models, especially those that require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors in recommending POIs. In order to enable the crowd to collaboratively recommend POIs to users, CYTRIP provides a shared workspace. In the shared workspace, the crowd can recommend as many POIs to as many requesters as they can, and they can also vote on the POIs recommended by other people when they find them interesting. In CYTRIP, anyone can make a contribution by recommending POIs to requesters based on the requesters' specified preferences. CYTRIP takes the recommended POIs as input to build a multi-day trip itinerary, taking into account the user's preferences, the various time constraints, and the locations. This input then becomes a multi-day trip planning problem formulated in Planning Domain Definition Language 3 (PDDL3). A sequence of actions formulated in a domain file is used to achieve the goals of the planning problem, which are the recommended POIs to be visited. The multi-day trip planning problem is a highly constrained problem. Sometimes it is not feasible to visit all the recommended POIs with the limited resources available, such as the time the user can spend. In order to cope with an unachievable goal that could yield no solution for the other goals, CYTRIP selects a set of feasible POIs prior to the planning process. The planning problem is created for the selected POIs and fed into the planner. The solution returned by the planner is then parsed into a multi-day trip itinerary and displayed to the user on a map. The proposed system is implemented as a web-based application built using PHP on the CodeIgniter Web framework. In order to evaluate the proposed system, an online experiment was conducted. Results of the online experiment show that, with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary that is tailored to the users' preferences and bound by their constraints, such as location or time constraints. The contributors also find that CYTRIP is a useful tool for collecting POIs from the crowd and planning a multi-day trip.
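To make the planning step more concrete, the sketch below pre-selects a feasible subset of recommended POIs under a time budget and emits a toy PDDL problem string for them. The domain name, predicates, and POI fields are invented for illustration; CYTRIP's actual PDDL3 domain and problem files (with durations, day structure, and location constraints) are richer than this.

```python
def select_feasible_pois(pois, time_budget):
    """Greedy pre-selection: keep the most-voted POIs while their estimated
    visit times still fit the available time (a stand-in for CYTRIP's
    feasibility filtering before planning)."""
    selected, used = [], 0.0
    for poi in sorted(pois, key=lambda p: p["votes"], reverse=True):
        if used + poi["visit_time"] <= time_budget:
            selected.append(poi)
            used += poi["visit_time"]
    return selected

def to_pddl_problem(selected, days):
    """Emit a minimal PDDL problem whose goals are the selected POIs."""
    poi_objs = " ".join(p["name"] for p in selected)
    day_objs = " ".join(f"day{i + 1}" for i in range(days))
    goals = "\n    ".join(f"(visited {p['name']})" for p in selected)
    return (f"(define (problem trip)\n"
            f"  (:domain multi-day-trip)\n"
            f"  (:objects {poi_objs} - poi {day_objs} - day)\n"
            f"  (:init (at start-location))\n"
            f"  (:goal (and\n    {goals}))\n)")

pois = [{"name": "museum", "votes": 12, "visit_time": 2.5},
        {"name": "old-town", "votes": 9, "visit_time": 3.0},
        {"name": "harbour", "votes": 4, "visit_time": 2.0}]
print(to_pddl_problem(select_feasible_pois(pois, time_budget=6.0), days=2))
```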

Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.127-138
    • /
    • 2011
  • The core service of most research portal sites is providing researchers with relevant research papers that match their research interests. This kind of service may only be effective and easy to use when a user can provide correct and concrete information about a paper, such as the title, authors, and keywords. Unfortunately, however, most users of this service are not acquainted with concrete bibliographic information. This implies that most users inevitably go through repeated trial-and-error attempts at keyword-based search. Retrieving a relevant research paper is especially difficult when a user is a novice in the research domain and does not know appropriate keywords. In this case, a user has to perform iterative searches as follows: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) perform another search with the acquired keywords. This usage pattern implies that the service quality and user satisfaction of a portal site are strongly affected by the quality of its keyword management and search mechanism. To overcome this kind of inefficiency, some leading research portal sites adopt an association rule mining-based keyword recommendation service similar to the product recommendation of online shopping malls. However, keyword recommendation based only on association analysis has the limitation that it can show only a simple and direct relationship between two keywords. In other words, association analysis itself is unable to present the complex relationships among many keywords in adjacent research areas. To overcome this limitation, we propose a hybrid approach for establishing an association network among keywords used in research papers. The keyword association network is established in the following phases: i) the set of keywords specified in a certain paper is regarded as a set of co-purchased items, ii) association analysis is performed for the keywords and frequent patterns of keywords that satisfy predefined thresholds of confidence, support, and lift are extracted, and iii) the frequent keyword patterns are schematized as a network to show the core keywords of each research area and the connecting keywords among two or more research areas. To assess the practical applicability of our approach, we performed a simple experiment with 600 keywords. The keywords were extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used SAS Enterprise Miner for association analysis and the R software for social network analysis. As the final outcome, we present a network diagram and a cluster dendrogram for the keyword association network. We summarize the results in Section 4 of this paper. The main contributions of our proposed approach are the following: i) the keyword network can provide an initial roadmap of a research area to researchers who are novices in the domain, ii) a researcher can grasp the distribution of the many keywords neighboring a certain keyword, and iii) researchers can get ideas for converging different research areas by observing connecting keywords in the keyword association network. Further studies should include the following. First, the current version of our approach does not implement a standard meta-dictionary. For practical use, homonym, synonym, and multilingual problems should be resolved with a standard meta-dictionary. Additionally, clearer guidelines for clustering research areas and defining core and connecting keywords should be provided. Finally, intensive experiments not only on Korean research papers but also on international papers should be performed.
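A pairwise version of the keyword-network construction can be sketched as follows. It is not the SAS Enterprise Miner / R pipeline used in the paper, but it shows how support, confidence, and lift thresholds turn co-occurring keywords into network edges; each paper's keyword list is treated as one transaction.

```python
from itertools import combinations
from collections import Counter

def keyword_network(papers, min_support=0.02, min_conf=0.3, min_lift=1.0):
    """`papers` is a list of keyword lists. Keyword pairs that pass the
    support/confidence/lift thresholds become edges of the keyword network;
    core keywords then appear as high-degree nodes, while connecting keywords
    bridge clusters belonging to different research areas."""
    n = len(papers)
    item_count, pair_count = Counter(), Counter()
    for kws in papers:
        kws = set(kws)
        item_count.update(kws)
        pair_count.update(frozenset(p) for p in combinations(sorted(kws), 2))
    edges = []
    for pair, cnt in pair_count.items():
        a, b = sorted(pair)
        support = cnt / n
        conf = cnt / item_count[a]            # confidence of the rule a -> b
        lift = conf / (item_count[b] / n)
        if support >= min_support and conf >= min_conf and lift >= min_lift:
            edges.append((a, b, {"support": support, "conf": conf, "lift": lift}))
    return edges
```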

The Carboxyl-terminal Tail of a Heterotrimeric Kinesin 2 Motor Subunit Directly Binds to β2-tubulin (Heterotrimeric Kinesin 2 모터 단백질의 Carboxyl-말단과 β2-tubulin의 결합)

  • Jeong, Young Joo;Park, Sung Woo;Kim, Sang-Jin;Lee, Won Hee;Kim, Mooseong;Urm, Sang-Hwa;Seog, Dae-Hyun
    • Journal of Life Science
    • /
    • v.29 no.3
    • /
    • pp.369-375
    • /
    • 2019
  • Microtubules form through the polymerization of α- and β-tubulin, and tubulin transport plays an important role in defining the rate of microtubule growth inside cellular appendages, such as the cilia and flagella. Heterotrimeric kinesin 2 is a molecular motor member of the kinesin superfamily (KIF) that moves along the microtubules to transport multiple cargoes. It consists of two motor subunits (KIF3A and KIF3B) and a kinesin-associated protein 3 (KAP3), forming a heterotrimeric complex. Heterotrimeric kinesin 2 interacts with many different binding proteins through the cargo-binding domains of the KIF3s, but these binding proteins have not yet been specified. To identify these proteins for KIF3A, we performed yeast two-hybrid (Y2H) screening and found a specific interaction with β2-tubulin (Tubb2), a microtubule component. Tubb2 was found to bind to the cargo-binding domain of KIF3A but did not interact with KIF3B, KIF5B, or kinesin light chain 1 in the Y2H assay. The carboxyl-terminal region of Tubb2 is essential for interaction with KIF3A. Other Tubb isoforms, including Tubb1, Tubb3, Tubb4, and Tubb5, also interacted with KIF3A in the Y2H screening. However, α1-tubulin (Tuba1) did not interact with KIF3A. In addition, an antibody to KIF3A specifically co-immunoprecipitated the KIF3B and KAP3 associated with Tubb2 from mouse brain extracts. In combination, these results suggest that a heterotrimeric kinesin 2 motor protein is capable of binding to tubulin and may transport it in cells.

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful for them. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask the authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not yet benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting, or even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document; as a result, keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of keywords assigned by the authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of the keywords assigned by the authors do not appear in the article and cannot be generated through keyword extraction algorithms. Our preliminary experiment also shows that 37% of keywords assigned by the authors are not included in the full text. This is the reason why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase in number, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
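The five-step assignment procedure can be sketched roughly as follows. The keyword-set weighting scheme and helper names are assumptions; each keyword set is represented here simply as a dict of term weights plus the keywords it stands for.

```python
import math
import re
from collections import Counter

def cosine(vec_a, vec_b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())
    na = math.sqrt(sum(w * w for w in vec_a.values()))   # steps (1)/(3): vector lengths
    nb = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_keywords(document, keyword_sets, top_n=5):
    """IVSM-style assignment sketch: represent the target document by term
    frequency, compare it with every weighted keyword set, and return the
    keywords of the best-matching sets."""
    terms = re.findall(r"[a-z]+", document.lower())       # step (2): parse
    doc_vec = dict(Counter(terms))                        # step (3): TF vector
    ranked = sorted(keyword_sets,                         # step (4): similarity
                    key=lambda ks: cosine(doc_vec, ks["weights"]),
                    reverse=True)
    keywords = []
    for ks in ranked:                                     # step (5): output
        for kw in ks["keywords"]:
            if kw not in keywords:
                keywords.append(kw)
        if len(keywords) >= top_n:
            break
    return keywords[:top_n]
```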

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.1-16
    • /
    • 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to the statistics of the World Health Organization, approximately 1.24 million deaths occurred on the world's roads in 2010. In order to reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, driver training programs, and so on. Records on traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationships between traffic accidents and related factors, including vehicle design, road design, weather, driver behavior, etc. Insights derived from these analyses can be used for accident prevention. Traffic accident data mining is an activity to find useful knowledge about such relationships that is not yet well known and that users may be interested in. Many studies about mining accident data have been reported over the past two decades. Most studies mainly focused on predicting accident risk using accident-related factors. Supervised learning methods like decision trees, logistic regression, k-nearest neighbors, and neural networks are used for such prediction. However, the prediction models derived from these algorithms are too complex for humans to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy for humans to understand, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive a comprehensible form of knowledge about the target domain. They derive a set of if-then rules that represent the relationship between the target feature and other features. Rules are fairly easy for humans to understand, and therefore they can help provide insight and comprehensible results. Association rule learning methods and subgroup discovery methods are representative rule-based learning methods for descriptive tasks. These two algorithms have been used in a wide range of areas, from transaction analysis and accident data analysis to the detection of statistically significant patient risk groups, the discovery of key persons in social communities, and so on. We use both the association rule learning method and the subgroup discovery method to discover useful patterns from a traffic accident dataset consisting of many features, including driver profile, accident location, accident type, vehicle information, regulation violations, and so on. The association rule learning method, which is an unsupervised learning method, searches for frequent item sets in the data and translates them into rules. In contrast, the subgroup discovery method is a kind of supervised learning method that discovers rules for user-specified target concepts satisfying a certain degree of generality and unusualness. Depending on what aspect of the data we are focusing our attention on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, some postprocessing steps are taken to make the rule set more compact and easier to understand by removing uninteresting or redundant rules. We conducted a set of experiments mining our traffic accident data in both unsupervised and supervised modes to compare these rule-based learning algorithms. Experiments with the traffic accident data reveal that association rule learning, in its pure unsupervised mode, can discover some hidden relationships among the features. Under a supervised learning setting with a combinatorial target feature, however, the subgroup discovery method finds good rules much more easily than the association rule learning method, which requires a lot of effort to tune its parameters.
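As an illustration of how the two rule-learning styles differ, the sketch below performs a brute-force form of subgroup discovery: it scores conjunctions of attribute=value conditions against a user-specified target using weighted relative accuracy, a standard quality measure that trades off generality (coverage) against unusualness (deviation of the target rate). It is not the algorithm or toolchain used in the paper, and the attribute names are invented.

```python
from itertools import combinations

def wracc(records, conditions, target):
    """Weighted relative accuracy of the subgroup defined by `conditions`:
    coverage * (target rate inside the subgroup - overall target rate)."""
    n = len(records)
    covered = [r for r in records if all(r.get(a) == v for a, v in conditions)]
    if not covered:
        return 0.0
    attr, value = target
    p_all = sum(1 for r in records if r.get(attr) == value) / n
    p_sub = sum(1 for r in covered if r.get(attr) == value) / len(covered)
    return (len(covered) / n) * (p_sub - p_all)

def discover_subgroups(records, attributes, target, max_conds=2, top_k=5):
    """Score every conjunction of up to `max_conds` attribute=value conditions
    and return the best ones; practical subgroup discovery prunes this search."""
    candidates = sorted({(a, r[a]) for r in records for a in attributes})
    scored = []
    for size in range(1, max_conds + 1):
        for conds in combinations(candidates, size):
            scored.append((wracc(records, conds, target), conds))
    return sorted(scored, reverse=True)[:top_k]

# Hypothetical usage on accident records:
# records = [{"road": "wet", "time": "night", "severity": "severe"}, ...]
# discover_subgroups(records, ["road", "time"], target=("severity", "severe"))
```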