• Title/Summary/Keyword: information space structure

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making and forecasting. For example, one representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by the structure of biological neural networks. Other machine learning models include the decision tree, naive Bayes and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is mainly used for classification and regression analysis, which fits our study well. The core principle of the SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belong based on the hyperplane obtained from the given data set. Thus, the greater the amount of meaningful data, the better the machine's learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and much research on forecasting stock prices with machine learning algorithms has been conducted successfully. Recently, financial companies have begun to provide the Robo-Advisor service, a compound of Robot and Advisor, which can perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, one of the machine learning methods, and of applying it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index, which is based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the VKOSPI index in real time. VKOSPI behaves like ordinary volatility and affects option prices: the direction of VKOSPI and option prices show a positive relation regardless of option type (call and put options with various strike prices). If volatility increases, all call and put option premiums increase, because the probability of the option being exercised increases. The investor can track the rise in an option's price with respect to a rise in volatility in real time through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified with real option data that an accurate forecast of VKOSPI can produce a large profit in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying the prediction to actual option trading.
In this study, we predicted daily VKOSPI changes with the SVM model and then entered an intraday option strangle position, which profits as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading on the SVM's predictions is applicable to real option trading. The prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half the benchmark's (100); a small number of trades indicates trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark's.
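The mechanics of the paper's pipeline (classify the direction of the daily VKOSPI change with an SVM, then trade only on predicted declines) can be sketched in a few lines of Python. The sketch below is illustrative only: the feature matrix, labels and chronological train/test split are synthetic placeholders, not the paper's data or feature set.

```python
# Illustrative sketch only: synthetic features stand in for the paper's
# VKOSPI inputs; the model and trading rule follow the abstract's description.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical feature matrix, e.g. lagged VKOSPI changes and index returns.
X = rng.normal(size=(500, 4))
# Label: 1 if VKOSPI fell during the day, else 0 (synthetic here).
y = (X[:, 0] + 0.5 * rng.normal(size=500) < 0).astype(int)

# Chronological split: time-series data must not be shuffled.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X[:400], y[:400])
print(f"directional accuracy: {model.score(X[400:], y[400:]):.2%}")

# Trading rule from the abstract: enter the intraday strangle position
# only on days the model predicts a VKOSPI decline.
entries = model.predict(X[400:]) == 1
print("position entries:", int(entries.sum()))
```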

A study on application of fractal structure on graphic design (그래픽 디자인에 있어서 프랙탈 구조의 활용 가능성 연구)

  • Moon, Chul
    • Archives of design research
    • /
    • v.17 no.1
    • /
    • pp.211-220
    • /
    • 2004
  • Chaos theory and fractal theory, which have become prominent as a new paradigm of natural science, suggest that nature should be understood as a whole rather than broken into separate elements. Fractal dimensions are used to measure the complexity of objects: we now have ways of measuring things that were traditionally meaningless or impossible to measure, including many irregularly shaped objects in man and nature. Expressing the complexity of nature in non-integer dimensions is an apt method when we adopt a viewpoint that leans toward the non-linear, the diverse, endless time, and complexity. Applying fractal geometry and chaos theory widely to the art field is a territory of imagination where art and science encounter each other, and yet there has not been much research in this area. In this study, formative vocabulary was extracted by analyzing objective data in order to grasp the formative principles and geometric characteristics of distinctive fractal figures. The study thus focuses not on fractals in mathematics as such, but on the concepts of self-similarity, recursiveness and randomness, on devices expressed from indescribable space, and on their formative similarity to graphic design. Fractal figures have the property that the structure does not change its nature even when the generating process is repeated infinitely many times; the limit the process produces is the fractal. Almost all fractals are at least partially self-similar: a part of the fractal is identical to the entire fractal even under magnification toward the infinitesimal, which means any part carries all the information needed to recompose the whole. On this basis, the research examines the possibility of analyzing the geometric characteristics of fractals for their plastic potential in graphic design forms. As a result, beautiful proportions appear in graphic design through mathematical calculation. Fractal dimension is an appropriate measure for expressing nature, since it allows us to quantify the complexity of an object, and fractal geometry should yield high added value through its peculiarity and character at the intersection of art and science. At a stage when the need to accept this demand and adapt ourselves to the change is gathering strength, this research is significant.
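Fractal dimension, which the abstract relies on as a measure of complexity, is estimated in practice by box counting: cover the figure with boxes of side eps, count the occupied boxes N(eps), and take the slope of log N(eps) against log(1/eps). Below is a minimal sketch (not from the paper) on a synthetic self-similar point set whose dimension is known.

```python
# Box-counting estimate of fractal dimension on a synthetic self-similar
# set (Sierpinski triangle via the chaos game); illustrative only.
import numpy as np

def chaos_game(n=100_000, seed=0):
    """Generate points attracted to the Sierpinski triangle."""
    rng = np.random.default_rng(seed)
    vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    p = np.array([0.1, 0.1])
    pts = np.empty((n, 2))
    for i in range(n):
        p = (p + vertices[rng.integers(3)]) / 2  # jump halfway to a vertex
        pts[i] = p
    return pts

def box_counting_dimension(points, epsilons=(1/4, 1/8, 1/16, 1/32, 1/64)):
    counts = []
    for eps in epsilons:
        # Number of grid boxes of side eps that contain at least one point.
        counts.append(len(np.unique(np.floor(points / eps), axis=0)))
    # Slope of log N(eps) versus log(1/eps) estimates the dimension.
    slope, _ = np.polyfit(np.log(1 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

pts = chaos_game()
# Theoretical value: log 3 / log 2, roughly 1.585.
print(f"estimated dimension: {box_counting_dimension(pts):.3f}")
```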

Open Skies Policy : A Study on the Alliance Performance and International Competition of FFP (항공자유화정책상 상용고객우대제도의 제휴성과와 국제경쟁에 관한 연구)

  • Suh, Myung-Sun;Cho, Ju-Eun
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.25 no.2
    • /
    • pp.139-162
    • /
    • 2010
  • In international air transport, the open skies policy implies freedom of the sky, or opening the sky. In the normative respect, it is a kind of open-door policy that grants various forms of traffic rights to other countries; on the other hand, it is a policy of free competition in international air transport. Since the Airline Deregulation Act of 1978, the United States has signed open skies agreements with many countries, starting with the Netherlands, so that competitive large airlines can compete in the international air transport market, where many business opportunities exist. South Korea now has open skies agreements with more than 20 countries. The frequent flyer program (FFP) is part of a broad-based marketing alliance that has been used as an airfare strategy since the U.S. government's airline deregulation. The membership-based program is an incentive plan that provides mileage points to customers for using airline services and rewards customer loyalty in tangible forms based on their accumulated points. In its early stages, the frequent flyer program focused on marketing efforts to attract customers, but now, in an environment of intense competition among airlines, the program is used as an important strategic marketing tool for enhancing business performance. Airline companies therefore agree that they need to identify customer needs in order to secure loyal customers more effectively. The outcomes of an airline's frequent flyer program can affect international competition in several ways. First, the airline can obtain a more dominant position in the air flight market by expanding its air route networks. Second, the availability of flight products for customers can be improved with an increase in flight frequency. Third, the airline can preferentially expand into new markets and thus gain advantages over its competitors. However, there are few empirical studies on airline frequent flyer programs. Accordingly, this study explores the effects of the program on international competition, after reviewing the types of strategic alliances between airlines. Making strategic airline alliances is a worldwide trend resulting from the open skies policy, and South Korea also needs to make its open skies agreements more realistic to promote the growth and competitiveness of domestic airlines. The present study concerns the performance of the airline frequent flyer program and international competition under the open skies policy. With a sample of five global alliance groups (Star, Oneworld, Wings, Qualiflyer and Skyteam), we examined empirically the effects that the resource structures and levels of information technology held by the airlines in each group have on the type of alliance, using one-way analysis of variance and regression analysis to test the hypotheses. The findings suggest that both large airlines and small or medium-size airlines in an alliance group with global networks and organizations are able to achieve high performance and secure international competitiveness. Airline passengers earn mileage points by using non-flight services through an alliance network of hotels, car-rental services, duty-free shops, travel agents and more, and they show high interest in and preference for the related service benefits.
Therefore, Korean airline companies should develop more aggressive marketing programs based on multilateral alliances with other service providers, including hotels, as well as with other airlines.
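For readers unfamiliar with the statistical setup, the abstract's one-way analysis of variance across the five alliance groups, followed by a regression, can be sketched as below. The group means and the IT-level predictor are synthetic placeholders; only the shape of the analysis follows the abstract.

```python
# Synthetic stand-in for the abstract's test setup: one-way ANOVA across
# the five alliance groups, then a simple regression on a hypothetical
# IT-level predictor. Values are invented; only the analysis shape holds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = ["Star", "Oneworld", "Wings", "Qualiflyer", "Skyteam"]
# Hypothetical performance score per member airline in each group.
samples = [rng.normal(loc=mu, scale=1.0, size=12)
           for mu in (5.0, 4.6, 4.1, 3.9, 4.8)]

f_stat, p_value = stats.f_oneway(*samples)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Regression of performance on the hypothetical IT-level predictor.
it_level = rng.normal(size=60)
performance = np.concatenate(samples) + 0.4 * it_level
result = stats.linregress(it_level, performance)
print(f"regression: slope = {result.slope:.2f}, p = {result.pvalue:.4f}")
```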

The application of fuzzy classification methods to spatial analysis (공간분석을 위한 퍼지분류의 이론적 배경과 적용에 관한 연구 - 경상남도 邑級以上 도시의 기능분류를 중심으로 -)

  • Jung, In-Chul
    • Journal of the Korean Geographical Society
    • /
    • v.30 no.3
    • /
    • pp.296-310
    • /
    • 1995
  • Classification of spatial units into meaningful sets is an important procedure in spatial analysis. It is crucial in characterizing and identifying spatial structures. But traditional classification methods such as cluster analysis require an exact database and impose clear-cut boundaries between classes. Scrutiny of realistic classification problems, however, reveals that the available information may be vague and the boundaries ambiguous. The weakness of conventional methods is that they fail to capture fuzzy data and the transition between classes. Fuzzy subset theory is useful for solving these problems. This paper aims at an understanding of the theoretical foundations of fuzzy spatial analysis and at finding the characteristics of fuzzy classification methods, through a literature review and a case study classifying the cities and eups of Kyung-Nam Province. The main findings are summarized as follows. 1. Following Dubois and Prade, fuzzy information involves an imprecise and/or uncertain evaluation. In geography, fuzzy information about spatial organization, the perception of geographical space and human behavior is frequent, but researchers have limited their work to numerical data processing and have not considered the spatial fringe. Fuzzy spatial analysis makes it possible to include the interface between groups in a classification. 2. Fuzzy numerical taxonomic methods were established by Deloche, Tranquis, Ponsard and Leung. Depending on the data and the method employed, the derived groups may be mutually exclusive or may overlap to a certain degree. A classification pattern can be derived for each degree of similarity/distance $\alpha$; by taking the values of $\alpha$ in ascending or descending order, a hierarchical classification is obtained. 3. The Kyung-Nam cities and eups were classified by fuzzy discrete classification, fuzzy conjoint classification and cluster analysis according to the ratios of persons employed by industry. As a result, they were divided into several groups with homogeneous characteristics. Fuzzy discrete classification and cluster analysis give clear-cut boundaries, whereas fuzzy conjoint classification delimits the edges and cores of the urban classes. 4. The results of the different methods vary, but each method contributes to revealing the transparency of the spatial structure. Across the three classifications, Chung-mu city, which has special characteristics, and the group of industrial cities composed of Changwon, Ulsan, Masan, Chinhai, Kimhai, Yangsan, Ungsang, Changsungpo and Shinhyun are evident in common. Whatever the appraisal of fuzzy classification methods, this framework appears more realistic and flexible in preserving the information pertinent to urban classification.
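The alpha-cut mechanism described in point 2 above (one classification pattern per similarity level alpha, with the hierarchy obtained by sweeping alpha) can be illustrated with a short sketch. The similarity matrix below is hypothetical; the paper derives its similarities from employment-by-industry ratios.

```python
# Alpha-cut classification on a hypothetical fuzzy similarity relation
# among 5 spatial units; sweeping alpha downward yields the hierarchy.
import numpy as np

# Symmetric similarity matrix with self-similarity 1 on the diagonal.
S = np.array([
    [1.0, 0.9, 0.3, 0.2, 0.2],
    [0.9, 1.0, 0.4, 0.2, 0.3],
    [0.3, 0.4, 1.0, 0.8, 0.7],
    [0.2, 0.2, 0.8, 1.0, 0.6],
    [0.2, 0.3, 0.7, 0.6, 1.0],
])

def alpha_cut_groups(S, alpha):
    """Connected components after keeping only pairs with S >= alpha."""
    n = len(S)
    labels = list(range(n))
    for i in range(n):
        for j in range(n):
            if S[i, j] >= alpha:
                a, b = labels[i], labels[j]
                labels = [a if lab == b else lab for lab in labels]
    ids = {lab: k for k, lab in enumerate(dict.fromkeys(labels))}
    return [ids[lab] for lab in labels]

# Strict cuts isolate units; looser cuts merge them into broader classes.
for alpha in (0.9, 0.7, 0.4):
    print(f"alpha = {alpha}: groups = {alpha_cut_groups(S, alpha)}")
```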

Manganese and Iron Interaction: a Mechanism of Manganese-Induced Parkinsonism

  • Zheng, Wei
    • Proceedings of the Korea Environmental Mutagen Society Conference
    • /
    • 2003.10a
    • /
    • pp.34-63
    • /
    • 2003
  • Occupational and environmental exposure to manganese continues to represent a realistic public health problem in both developed and developing countries. The increased use of MMT as a replacement for lead in gasoline creates a new source of environmental exposure to manganese. It is, therefore, imperative that further attention be directed at the molecular neurotoxicology of manganese. The need for a more complete understanding of manganese functions both in health and disease, and for a better defined role of manganese in iron metabolism, is well substantiated. In-depth studies in this area should provide novel information on the potential public health risk associated with manganese exposure. They will also explore novel mechanisms of manganese-induced neurotoxicity from the angle of Mn-Fe interaction at both the systemic and cellular levels. More importantly, the results of these studies will offer clues to the etiology of IPD and its associated abnormal iron and energy metabolism. To achieve these goals, however, a number of outstanding questions remain to be resolved. First, one must understand which species of manganese in biological matrices plays the critical role in the induction of neurotoxicity: Mn(II) or Mn(III)? In our own studies with aconitase, Cpx-I and Cpx-II, manganese was added to the buffers as the divalent salt, i.e., $MnCl_2$. While it is quite reasonable to suggest that the effect on aconitase and/or Cpx-I activities was associated with the divalent species of manganese, the experimental design does not preclude the possibility that a manganese species of a higher oxidation state, such as Mn(III), is required for the induction of these effects. The ionic radius of Mn(III) is 65 pm, similar to the ionic size of Fe(III) (65 pm in the high-spin state) in aconitase (Nieboer and Fletcher, 1996; Sneed et al., 1953). Thus it is plausible that the higher oxidation state of manganese optimally fits into the geometric space of aconitase, serving as the active species in this enzymatic reaction. In the current literature, most studies on manganese toxicity have used Mn(II) as $MnCl_2$ rather than Mn(III). The obvious advantage of Mn(II) is its good water solubility, which allows effortless preparation for either in vivo or in vitro investigation, whereas the poor solubility of almost all Mn(III) salt products renders the comparison between the two valent manganese species nearly infeasible. Thus a closer collaboration with physicochemists to develop a better way of studying Mn(III) species in biological matrices is pressingly needed. Second, in spite of the special affinity of manganese for mitochondria and its chemical similarity to iron, there is sound reason to postulate that manganese may act as an iron surrogate in certain iron-requiring enzymes. It is, therefore, imperative to design physicochemical studies to determine whether manganese can indeed exchange with iron in proteins, and to understand how manganese interacts with the tertiary structure of proteins. Studies of the binding properties (such as affinity constants, dissociation parameters, etc.) of manganese and iron to key enzymes associated with iron and energy regulation would add to our knowledge of Mn-Fe neurotoxicity. Third, manganese exposure, either in vivo or in vitro, promotes cellular overload of iron. It is still unclear, however, exactly how manganese interacts with cellular iron regulatory processes and what mechanism underlies this cellular iron overload.
As discussed above, the binding of IRP-I to TfR mRNA leads to the expression of TfR, thereby increasing cellular iron uptake. The sequence encoding TfR mRNA, in particular the IRE fragments, has been well documented in the literature. It is therefore possible to use molecular techniques to elaborate whether manganese cytotoxicity influences the mRNA expression of iron regulatory proteins and how manganese exposure alters the binding activity of IRPs to TfR mRNA. Finally, manganese investigation has largely focused on issues ranging from disposition/toxicity studies to the characterization of clinical symptoms; much less has been done regarding the risk assessment of environmental/occupational exposure. One of the unsolved, pressing puzzles is the lack of reliable biomarkers for manganese-induced neurologic lesions in long-term, low-level exposure situations. The lack of such a diagnostic means renders it impossible to assess the human health risk and the long-term social impact associated with potentially elevated manganese in the environment. The biochemical interaction between manganese and iron, particularly the ensuing subtle changes in certain relevant proteins, provides the opportunity to identify and develop such a specific biomarker for manganese-induced neuronal damage. By learning the molecular mechanism of cytotoxicity, one will be able to find better ways to predict and treat manganese-initiated neurodegenerative diseases.

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.111-125
    • /
    • 2011
  • Clustering is the process of grouping similar or related documents into clusters and assigning a meaningful concept to each cluster. By narrowing the search range to the documents belonging to related clusters, clustering facilitates fast and accurate search for relevant documents. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept most relevant to the cluster. One problem that often appears in this context is the detection of a complex concept that overlaps several simple concepts at the same hierarchical level. Previous clustering methods could neither identify and represent a complex concept that belongs to several different clusters at the same level of the concept hierarchy, nor validate the semantic hierarchical relationship between a complex concept and each of its simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies traditional agglomerative hierarchical clustering to allow overlapping clusters at the same level of the concept hierarchy. The HOC algorithm represents the clustering result not as a tree but as a lattice, in order to detect complex concepts. We developed a system that employs the HOC algorithm for complex concept detection. The system operates in three phases: 1) preprocessing of documents, 2) clustering with the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained from clustering. The preprocessing phase represents each document as an x-y coordinate in a 2-dimensional space based on the weights of the terms appearing in it. First, stopword removal and stemming are applied to extract index terms. Then each index term is assigned a TF-IDF weight, and the x-y coordinate of each document is determined by combining the TF-IDF values of its terms. The clustering phase uses the HOC algorithm, with the similarity between documents calculated by Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. Then the distance between every two clusters is measured, and the closest clusters are merged into a new cluster. This process is repeated until the root cluster is generated. The validation phase applies feature selection to check whether the cluster concepts built by the HOC algorithm have meaningful hierarchical relationships. Feature selection extracts key features from a document by identifying and weighting its important and representative terms; to select key features correctly, a method is needed to determine how much each term contributes to the class of the document. Among the several methods that achieve this, this paper adopts the $\chi^2$ statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out on the well-known Reuters-21578 news collection.
The performance evaluation showed that the HOC algorithm contributes greatly to detecting and producing complex concepts by generating the concept hierarchy as a lattice structure.
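The first two phases can be compressed into a short sketch: TF-IDF representation and distance-based grouping with overlap allowed. The overlap rule used here (a document joins every cluster whose centroid lies within a small tolerance of its nearest one) is an illustrative stand-in, not the published HOC merge rule, and the documents and seed centroids are invented.

```python
# Sketch of phases 1-2: TF-IDF representation, then an overlap-allowing
# assignment. Simplified illustration only; not the published HOC rule.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances

docs = [
    "stock market prices rise",
    "market prices fall on trade news",
    "new clustering algorithm for documents",
    "hierarchical clustering of text documents",
]

# Phase 1: stopword removal and TF-IDF weighting.
X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()

def overlapping_assign(X, centroids, tol=0.10):
    """Assign each document to every cluster whose centroid lies within
    (1 + tol) times the distance to its nearest centroid."""
    D = pairwise_distances(X, centroids, metric="euclidean")
    nearest = D.min(axis=1, keepdims=True)
    return D <= nearest * (1 + tol)  # boolean document-by-cluster matrix

# Phase 2: two hypothetical seed centroids (documents 0 and 3).
membership = overlapping_assign(X, X[[0, 3]])
print(membership)  # a row with more than one True is an overlapped document
```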

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business; therefore, to gather, store, categorize and analyze these log data, a separate log data processing system needs to be established. However, in existing computing environments it is difficult both to realize the flexible storage expansion needed for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel distributed processing of the massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after recovery from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data; moreover, their strict schemas cannot expand nodes when stored data must be distributed across nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion as the data grow; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented and document-oriented types. Of these, the representative document-oriented database MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when data increase rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over a bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions, and the aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through MongoDB log-insert performance evaluations over various chunk sizes.
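The role of the MongoDB module (schema-free storage of heterogeneous log documents plus per-unit-time aggregation for the graph generator) can be sketched with pymongo. The host, database, collection and field names below are hypothetical placeholders.

```python
# Sketch of the MongoDB module's role with pymongo: schema-free insertion
# of heterogeneous log documents and a per-hour aggregation of the kind
# the log graph generator might request. All names are hypothetical.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["raw"]

# Documents in one collection may differ in shape, which a fixed
# relational schema would not accommodate.
logs.insert_many([
    {"ts": datetime(2013, 6, 1, 9, 0, tzinfo=timezone.utc),
     "type": "transfer", "branch": "A01", "amount": 150_000},
    {"ts": datetime(2013, 6, 1, 9, 5, tzinfo=timezone.utc),
     "type": "login", "channel": "web", "user_agent": "Mozilla/5.0"},
])

# Count logs per hour and type.
pipeline = [
    {"$group": {"_id": {"hour": {"$hour": "$ts"}, "type": "$type"},
                "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in logs.aggregate(pipeline):
    print(row)
```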

A prognosis discovering lethal-related genes in plants for target identification and inhibitor design (식물 치사관련 유전자를 이용하는 신규 제초제 작용점 탐색 및 조절물질 개발동향)

  • Hwang, I.T.;Lee, D.H.;Choi, J.S.;Kim, T.J.;Kim, B.T.;Park, Y.S.;Cho, K.Y.
    • The Korean Journal of Pesticide Science
    • /
    • v.5 no.3
    • /
    • pp.1-11
    • /
    • 2001
  • New technologies will have a large impact on the discovery of new herbicide sites of action. Genomics, combinatorial chemistry and bioinformatics help take advantage of serendipity through the sequencing of huge numbers of genes and the synthesis of large numbers of chemical compounds. There are approximately $10^{30}$ to $10^{50}$ possible molecules in molecular space, of which only a fraction have been synthesized. Combining this potential with access to 50,000 plant genes in the future elevates the probability of discovering new herbicidal sites of action. If 0.1, 1.0 or 10% of the total genes in a typical plant are valid herbicide targets, a plant with 50,000 genes would provide about 50, 500 or 5,000 targets, respectively. However, only 11 herbicide targets have been identified and commercialized. The successful design of novel herbicides depends on careful consideration of a number of factors, including target enzyme selection and validation, inhibitor design, and metabolic fate. Biochemical information can be used to identify enzymes whose inhibition produces lethal phenotypes, and the identification of a lethal target site is an important step in this approach. An examination of the characteristics of known targets provides crucial insight into the definition of a lethal target. Recently, antisense RNA suppression of enzyme translation has been used to determine the genes required for toxicity, offering a strategy for identifying lethal target sites. After the identification of a lethal target, detailed knowledge such as the enzyme kinetics and the protein structure may be used to design potent inhibitors, and various types of inhibitors may be designed for a given enzyme. Strategies for selecting new enzyme targets that give the desired physiological response upon partial inhibition include the identification of chemical leads and lethal mutants and the use of antisense technology. Enzyme inhibitors with agrochemical utility can be categorized into six major groups: ground-state analogues, group-specific reagents, affinity labels, suicide substrates, reaction intermediate analogues, and extraneous site inhibitors. In this review, examples of each category, with their advantages and disadvantages, are discussed. Target identification and the construction of a potent inhibitor may not, in themselves, lead to an effective herbicide: the desired in vivo activity, uptake and translocation, and the metabolism of the inhibitor should be studied in detail to assess the full potential of the target. Strategies for delivering the compound to the target enzyme and avoiding premature detoxification may include a proherbicidal approach, especially when inhibitors are highly charged or when selective detoxification or activation can be exploited. Exploiting differences in detoxification or activation between weeds and crops may enhance selectivity. Without a full appreciation of each of these facets of herbicide design, the chances for success with the target- or enzyme-driven approach are reduced.

Current status of Brassica A genome analysis (Brassica A genome의 최근 연구 동향)

  • Choi, Su-Ryun;Kwon, Soo-Jin
    • Journal of Plant Biotechnology
    • /
    • v.39 no.1
    • /
    • pp.33-48
    • /
    • 2012
  • Driven by scientific curiosity about the structure and function of crops and by experimental efforts to apply this knowledge to plant breeding, genetic maps have been constructed for various crops. In the case of Brassica crops in particular, genetic mapping has accelerated since the genetic information of the model plant Arabidopsis became available; as a result, the sequencing of the whole B. rapa genome (the A genome) has recently been completed. The genome sequences offer opportunities to develop molecular markers for genetic analysis of Brassica crops. RFLP markers have been widely used as the basis for genetic map construction, but their detection system is inefficient, so the technical efficiency and analysis speed of PCR-based markers make them preferable for many forms of Brassica genome study. Massive numbers of sequence-informative markers such as SSRs, SNPs and InDels are also available to increase marker density for high-resolution genetic analysis. High-density maps are invaluable resources for QTL analysis, marker-assisted selection (MAS), map-based cloning, and comparative analysis within Brassica as well as with related crop species. Additionally, the advent of next-generation sequencing technology has provided momentum for molecular breeding. Here we summarize the genetic and genomic resources and suggest their applications for molecular breeding in Brassica crops.

Study of the ENC reduction for mobile platform (모바일 플랫폼을 위한 전자해도 소형화 연구)

  • 심우성;박재민;서상현
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2003.05a
    • /
    • pp.181-186
    • /
    • 2003
  • The satellite navigation system is widely used to identify a user's position regardless of weather or geographic conditions, and it also underpins the new technology of marine LBS (Location Based Service), which builds on geographic information such as the ENC. Typical marine LBS systems, such as ECDIS or ECS, use the ENC itself on powerful processors installed on a ship's bridge. Since the ENC has a relatively heavy structure, with a dummy format for data transfer between different systems, it must be reduced to a small, compact size for use on a mobile platform. In this paper, we assume that mobile devices such as a PDA or Webpad can serve as small-capacity mobile platforms. However, the ENC must be updated periodically with the update profile data produced by hydrographic offices (HO), and if the ENC were reduced without considering updates, newly updated data could no longer be obtained. In summary, we studied the considerations for ENC reduction while preserving update capability, which will make the ENC useful on many mobile platforms for various applications.
