• Title/Summary/Keyword: Information Use Efficiency (정보 이용 효율성)


Frequency of Spontaneous Polyploids in Monoembryonic Jeju Native Citrus Species and Some Mandarin Cultivars (단배성 제주 재래귤 및 만다린잡종에서 자연 발생적인 배수체의 발생 빈도)

  • Chae, Chi-Won;Yun, Su-Hyun;Park, Jae-Ho;Kim, Min-Ju;Koh, Sang-Wook;Song, Kwan-Jeong;Lee, Dong-Hun
    • Journal of Life Science
    • /
    • v.22 no.7
    • /
    • pp.871-879
    • /
    • 2012
  • Polyploids are a potentially important germplasm source in seedless citrus breeding programs. Seedlessness is one of the most commercially promising traits of mandarin, and triploid hybrids possess it permanently. New constant triploid hybrids can be recovered through diploid species hybridization, either from the fusion of divalent gametes, which occurs at low frequency, or through intra- and inter-ploidy crosses. However, extensive breeding work based on the small $F_1$ hybrid seeds that develop is impossible without an effective aseptic methodology and ploidy determination. In this study, in vitro embryo culture was employed to recover natural hybrids from monoembryonic diploid, open-pollinated mandarins, and flow cytometry was used to determine their ploidy level. A total of 10,289 seeds were extracted from 792 fruits, approximately 13 seeds per fruit. The average frequency of small seeds was 7.1%, and the average frequency of small seeds per fruit was 8.9% for 'Clementine', 10.2% for 'Harehime', 2.6% for 'Kamja', 3.1% for 'Pyunkyool', 2.8% for 'Sadookam', and 7.0% for 'Wilking' mandarin. The average size of a perfect seed was $49.52 \pm 0.07\,mm^2$ ('Clementine'), while a small seed measured $7.95 \pm 0.04\,mm^2$ ('Clementine'), about 1/6 the size of a perfect seed. In total, 731 small seeds were obtained, all of which contained only one embryo per seed. The recovery efficiency of 'Clementine' was 14 times higher than that of 'Wilking' and more than 109 times higher than that of 'Pyunkyool'. This basic information on spontaneous polyploidy supports the hybridization of constant triploids and increases the efficiency of conventional crosses.

Effect of Capital Market Return On Insurance Coverage : A Financial Economic Approach (투자수익(投資收益)이 보험수요(保險需要)에 미치는 영향(影響)에 관한 이론적(理論的) 고찰(考察))

  • Hong, Soon-Koo
    • The Korean Journal of Financial Management
    • /
    • v.10 no.1
    • /
    • pp.249-280
    • /
    • 1993
  • Recent financial theory views insurance policies as financial instruments that are traded in markets and whose prices reflect the forces of supply and demand. This article analyzes an individual's insurance purchasing behavior along with capital market investment activities, which provides a more realistic look at the tradeoff between insurance and investment in the individual's budget constraint. It is shown that the financial economic concept of insurance cost should reflect the opportunity cost of the insurance premium. The author demonstrates the importance of riskless and risky financial assets in reaching an equilibrium insurance premium. In addition, the paper investigates how investment income could affect the four established theorems in the traditional insurance literature. At present in Korea, price deregulation is being debated as the most important current issue in the insurance industry. In view of the results of this paper, insurance companies should recognize investment income in pricing their coverage if insurance prices are deregulated. Otherwise, price competition may force insurance companies to restrict coverage or to leave the market.
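
As a minimal illustration of the opportunity-cost idea above (a sketch, not the paper's own model, which also admits risky assets), consider a one-period setting with riskless rate $r_f$, premium $P$ paid up front, and expected indemnity $E[I]$ received at the end of the period. The economic cost of coverage and the zero-cost equilibrium premium are then

$$C = P(1 + r_f) - E[I], \qquad P^{*} = \frac{E[I]}{1 + r_f}.$$

Recognizing investment income thus pushes the equilibrium premium below the expected indemnity, which is the pricing implication the paper draws for a deregulated market.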


QTL Mapping for 6-Year-Old Growths of a Single Open-Pollinated Half-Sib Family of a Selected Clone 7-1037 in Loblolly Pine(Pinus taeda) and Average Effect of QTL Allele Substitution (테다소나무 7-1037 클론의 단일 반형매 풍매가계 6년생 생장에 대한 QTL mapping과 QTL 대립유전자 치환의 평균효과)

  • Kim, Yong-Yul;Lee, Bong-Choon;O'Malley, David M.
    • Journal of Korean Society of Forest Science
    • /
    • v.95 no.4
    • /
    • pp.483-494
    • /
    • 2006
  • We conducted QTL mapping for the 6-year growth of open-pollinated half-sib progenies from a selected clone, 7-1037, in Pinus taeda. With an AFLP marker analysis on haploid DNA samples from the megagametophytes of the open-pollinated seeds, we constructed 20 framework maps spanning a total of 1,869 cM, with an average interval of 18.5 cM between markers. Composite interval mapping revealed that one QTL explained 5.9% of the total phenotypic variation in height, and three QTLs accounted for 3.9~5.6% of the variation in diameter at breast height (DBH). There were no correlations between the QTLs. The genetic effects of the QTLs were 39.6 cm for height and 7.20~9.41 mm for DBH. The average effects of gene substitution of the markers closely linked to the QTLs were 44.3 cm for height and 8.38~11.81 mm for DBH. Under the assumption that the within-family heritability for the growth traits of loblolly pine is less than 0.2, the QTLs accounted for 26.8% of the additive genetic variance of the progenies. In terms of relative selection efficiency, individual selection based on QTL markers could be 5 times as efficient as phenotypic selection. The results of this study indicate that QTL mapping with an open-pollinated half-sib family could be more practical and applicable to conventional seed orchard-based selection work than other mapping methods based on a single full-sib family, in particular because it can provide information crucial for within-family individual selection, such as breeding values.
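
For readers outside quantitative genetics, the "average effect of gene substitution" reported above is the standard textbook quantity (these formulas are classical definitions, e.g. Falconer-style, not taken from the paper itself): for a biallelic QTL with allele frequencies $p$ and $q = 1 - p$, additive effect $a$, and dominance deviation $d$,

$$\alpha = a + d(q - p), \qquad V_A = 2pq\,\alpha^{2},$$

where $\alpha$ is the average effect of an allele substitution and $V_A$ is the QTL's contribution to the additive genetic variance, the quantity against which the 26.8% figure above is expressed.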

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, a separate log data processing system is needed to gather, store, categorize, and analyze the log data generated while processing a client's business. However, existing computing environments make it difficult both to realize flexible storage expansion for massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of existing computing infrastructures. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment and can flexibly expand computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic recovery functions that let the system continue operating after a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict, complex schemas that are inappropriate for processing unstructured log data, and such schemas cannot easily expand across nodes when the amount of stored data increases rapidly. NoSQL databases do not provide the complex computations that relational databases offer, but they can easily expand through node dispersion when the amount of data grows rapidly; they are non-relational databases with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a schema-free structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB insert-performance evaluation over various chunk sizes.
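
A minimal sketch of the schema-free storage described above, assuming the pymongo client; the connection string, database and collection names, and log fields are hypothetical, as the paper does not publish its log formats:

```python
from pymongo import MongoClient

# hypothetical connection and namespace
client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["client_logs"]

# schema-free inserts: each log event may carry different fields
logs.insert_one({"type": "transaction", "branch": "0123",
                 "ts": "2013-06-01T09:30:00Z",
                 "payload": {"amount": 50000, "channel": "ATM"}})
logs.insert_one({"type": "login", "ts": "2013-06-01T09:31:12Z",
                 "client_ip": "10.0.0.7"})

# aggregate counts per log type, e.g. as input to a log graph generator
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```

In a sharded deployment, enabling Auto-Sharding on this collection would let MongoDB spread chunks across nodes as insert volume grows, which is the property the chunk-size evaluation above measures.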

EU's Space Code of Conduct: Right Step Forward (EU의 우주행동강령의 의미와 평가)

  • Park, Won-Hwa
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.27 no.2
    • /
    • pp.211-241
    • /
    • 2012
  • The Draft International Code of Conduct for Outer Space Activities, officially proposed by the European Union on the occasion of the 55th Session of the United Nations Committee on the Peaceful Uses of Outer Space in June 2012 in Vienna, Austria, is intended to fill the lacunae in the norms applicable to human activities in outer space and thus merits our attention. The missing elements of those norms range from the prohibition of an arms race, and the safety and security of space objects including measures to reduce space debris, to the exchange of information on space activities among space-faring nations. The EU's initiative, when implemented, will cover such issues of interest to the international community or will eventually prepare a forum to deal with them. The initiative, begun at the end of 2008, included unofficial contacts with major space powers, in particular the USA, whose position is believed to have been reflected in the Draft, with the aim of having it adopted in 2013. Although the Code would be soft law rather than hard law for the subscribing countries, the USA appears wary of the eventuality that its strategic advantages in outer space could be curtailed if the Code's currently non-binding restraint on placing weapons in outer space were to harden into a prohibition. It is with this trepidation that the USA has opposed the adoption of United Nations General Assembly resolutions on the prevention of an arms race in outer space (PAROS) and, in the same context, the setting-up of a working group on the arms race in outer space within the Conference on Disarmament. China and Russia, who together put forward a draft Treaty on Prevention of the Placement of Weapons in Outer Space and of the Threat or Use of Force against Outer Space Objects (PPWT) in 2008, would not feel comfortable either, because the EU initiative steals their limelight. Consequently their reactions to the Draft Code are understandably passive, while the reaction of the USA to the PPWT was a clear-cut "No". Against this background, the future of the EU Code is uncertain. Nevertheless, the purpose of the Code, to reduce space debris, to allow the exchange of information on space activities, and to protect space objects through safety and security measures, all to maximize the principle of the peaceful use and exploration of outer space, is a laudable effort on the part of the EU. When detailed negotiations are held, issues such as the cost of setting up an office for the clerical work could be discussed to find an efficient and economical mechanism; for example, the new clerical work envisaged in the Draft Code could be discharged by the current UN OOSA (Office for Outer Space Affairs) with minimal additional resources. The EU's initiative is another meaningful contribution, following its role in the adoption of the Kyoto Protocol of 1997 to the UNFCCC (UN Framework Convention on Climate Change), and deserves praise from the thoughtful international community.


Red Tide Detection through Image Fusion of GOCI and Landsat OLI (GOCI와 Landsat OLI 영상 융합을 통한 적조 탐지)

  • Shin, Jisun;Kim, Keunyong;Min, Jee-Eun;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.2_2
    • /
    • pp.377-391
    • /
    • 2018
  • To monitor red tide efficiently over a wide range, the need for red tide detection using remote sensing is increasing. Previous studies, however, have focused on developing red tide detection algorithms for ocean colour sensors. In this study, we propose the use of multiple sensors to address the inaccuracy of red tide detection in coastal areas with high turbidity, which has been pointed out as a limitation of satellite-based red tide monitoring. The study areas were selected based on the red tide information provided by the National Institute of Fisheries Science, and spatial fusion and spectral-based fusion were attempted using GOCI images from an ocean colour sensor and Landsat OLI images from a terrestrial sensor. Spatial fusion of the two images improved detection both of coastal red tide, which is impossible to observe in GOCI images, and of outer sea areas, where the quality of the Landsat OLI images is low. Spectral-based fusion was performed at the feature level and at the raw-data level, and there was no significant difference between the red tide distribution patterns derived from the two methods. With the feature-level method, however, the red tide area tended to be overestimated when the spatial resolution of the image was low. Pixel segmentation by the linear spectral unmixing method showed that the difference in the estimated red tide area increases as the number of pixels with a low red tide ratio increases. At the raw-data level, the Gram-Schmidt sharpening method estimated a somewhat larger area than the PC spectral sharpening method, but no significant difference was observed. This study shows that coastal red tide in highly turbid water, as well as red tide in outer sea areas, can be detected through spatial fusion of ocean colour and terrestrial sensors. By presenting various spectral-based fusion methods, it also suggests a more accurate way of estimating the red tide area. These results are expected to provide more precise detection of red tide around the Korean peninsula and the accurate red tide area information needed to determine countermeasures for effective red tide control.
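
A minimal sketch of the linear spectral unmixing step used for pixel segmentation, assuming NumPy and SciPy; the endmember spectra and the observed reflectances below are invented for illustration and are not the paper's data:

```python
import numpy as np
from scipy.optimize import nnls

# columns: hypothetical endmember spectra over four bands
# (red tide, ambient seawater, suspended sediment)
E = np.array([[0.021, 0.045, 0.060],
              [0.034, 0.040, 0.072],
              [0.058, 0.030, 0.080],
              [0.042, 0.022, 0.065]])

pixel = np.array([0.035, 0.045, 0.060, 0.045])  # observed reflectance

# non-negative least squares, then normalize abundances to sum to 1
a, _ = nnls(E, pixel)
a /= a.sum()
print(dict(zip(["red_tide", "seawater", "sediment"], a.round(3))))
```

Pixels whose red-tide abundance falls below a chosen threshold are the "low red tide ratio" pixels that drive the area differences noted above.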

A practical analysis approach to the functional requirements standards for electronic records management system (기록관리시스템 기능요건 표준의 실무적 해석)

  • Yim, Jin-Hee
    • The Korean Journal of Archival Studies
    • /
    • no.18
    • /
    • pp.139-178
    • /
    • 2008
  • The functional requirements standards for electronic records management systems (ERMS) published recently describe their specifications very precisely, covering not only the core functions of records management but also system management functions and optional modules. That these standards resemble one another in the functions they describe reflects the global standardization trend in electronic records practice. In addition, because these standards have been built in collaboration among archivists from many national archives, IT specialists, consultants, and records management application vendors, they are not only of high quality but also well positioned to serve as certification criteria. Though there are many ways to benchmark the functional requirements standards developed from advanced electronic records management practice, this paper shows the possibility, and meaningful business cases, of gaining useful practical ideas by envisioning electronic records management practice in light of these standards. The business cases explore central functions of records management and the intellectual control of records, such as classification schemes and disposal schedules. The first example concerns the classification scheme: should the classification be fixed at the same number of levels, and should a record item be filed only at the last node of the classification scheme? The second example addresses a precise disposition schedule that can impose an event-driven chronological retention period on records and that can operate using an inheritance concept between parent and child nodes in the classification scheme (see the sketch after this abstract). The third example shows the usage of the function that holds (freezes) and releases records required to be kept as evidence, to satisfy legal compliance such as e-Discovery or the risk management of organizations, under the premise that records management should be the basis of legal compliance. The last example shows cases of the bulk and batch operations required if records managers are to use the ERMS as a practical tool. Records managers need to be able to understand and interpret the specifications of the functional requirements standards for ERMS from a practical viewpoint, and to review the standards and extract the specifications required to upgrade their own ERMS. The National Archives of Korea should give stakeholders a sound basis for implementing effective and efficient electronic records management practices by expanding the scope of use of the functional requirements standard for ERMS and building a common understanding of its implications.
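
A minimal sketch of the second example above, an event-driven retention period inherited from parent to child nodes of the classification scheme; all class and field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Node:
    name: str
    parent: Optional["Node"] = None
    retention_years: Optional[int] = None  # None = inherit from parent

    def effective_retention(self) -> int:
        node = self
        while node.retention_years is None:
            node = node.parent  # walk up the classification scheme
        return node.retention_years

def disposal_date(node: Node, trigger: date) -> date:
    # event-driven schedule: the retention clock starts at the trigger
    # event (e.g. contract termination), not at record creation
    return trigger + timedelta(days=365 * node.effective_retention())

loans = Node("Loans", retention_years=10)
closed = Node("Loans/Closed accounts", parent=loans)  # inherits 10 years
print(disposal_date(closed, trigger=date(2008, 3, 1)))
```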

A Study on Evaluating the Possibility of Monitoring Ships of CAS500-1 Images Based on YOLO Algorithm: A Case Study of a Busan New Port and an Oakland Port in California (YOLO 알고리즘 기반 국토위성영상의 선박 모니터링 가능성 평가 연구: 부산 신항과 캘리포니아 오클랜드항을 대상으로)

  • Park, Sangchul;Park, Yeongbin;Jang, Soyeong;Kim, Tae-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1463-1478
    • /
    • 2022
  • Maritime transport accounts for 99.7% of the exports and imports of the Republic of Korea; therefore, developing a vessel monitoring system for efficient operation is of significant interest. Several studies have focused on tracking and monitoring vessel movements based on automatic identification system (AIS) data; however, ships without AIS offer limited monitoring and tracking ability. High-resolution optical satellite images can provide the missing layer of information for AIS-based monitoring systems because they can identify non-AIS vessels and small ships over a wide range. It is therefore worth investigating vessel monitoring and small-vessel classification systems based on high-resolution optical satellite images. This study examined the possibility of developing a ship monitoring system using Compact Advanced Satellite 500-1 (CAS500-1) satellite images by first training a deep learning model on satellite image data and then performing detection on other images. The training data were acquired from ships in the Yellow Sea and its major ports, and the detection model was established using the You Only Look Once (YOLO) algorithm. Ship detection performance was evaluated for one domestic and one international port. When the detection results for ships in the anchorage and berth areas were compared with the ship classification information obtained from AIS, accuracies of 85.5% and 70% were achieved using the domestic and international classification models, respectively. The results indicate that high-resolution satellite images can be used to monitor moored vessels. The developed approach could be applied to vessel tracking and monitoring systems at major ports around the world if the accuracy of the detection model is improved through continuous construction of training data.
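
A minimal sketch of the detection workflow, assuming the ultralytics YOLO package (the paper does not name its implementation or version) and a hypothetical ships.yaml dataset configuration pointing at labeled CAS500-1 image chips:

```python
from ultralytics import YOLO

# fine-tune pretrained weights on the ship dataset (paths are hypothetical)
model = YOLO("yolov8n.pt")
model.train(data="ships.yaml", epochs=100, imgsz=640)

# detect ships in a new port scene; detections can then be matched
# against AIS records to separate AIS and non-AIS vessels
results = model("busan_new_port.png", conf=0.25)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```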

Analysis of the impact of mathematics education research using explainable AI (설명가능한 인공지능을 활용한 수학교육 연구의 영향력 분석)

  • Oh, Se Jun
    • The Mathematical Education
    • /
    • v.62 no.3
    • /
    • pp.435-455
    • /
    • 2023
  • This study focused on developing an Explainable Artificial Intelligence (XAI) model to discern and analyze papers with significant impact in the field of mathematics education. To achieve this, meta-information from 29 domestic and international mathematics education journals was used to construct a comprehensive academic research network for mathematics education, built by integrating five sub-networks: 'paper and its citation network', 'paper and author network', 'paper and journal network', 'co-authorship network', and 'author and affiliation network'. A Random Forest machine learning model was employed to evaluate the impact of individual papers within this network, and SHAP, an XAI technique, was used to analyze the reasons behind the model's assessment of impactful papers. The key features identified for determining impactful papers included 'paper network PageRank', 'changes in citations per paper', 'total citations', 'changes in the author's h-index', and 'citations per paper of the journal'; papers, authors, and journals all evidently play significant roles in the evaluation of individual papers. Comparing domestic and international mathematics education research revealed variations in these patterns; notably, 'co-authorship network PageRank' was more significant in domestic research. The XAI model proposed in this study serves as a tool for determining the impact of papers using AI and gives researchers strategic direction when writing papers: expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major elements enhancing a paper's impact. Based on these findings, researchers can understand clearly how their work is perceived and evaluated in academia and identify the key factors influencing those evaluations. This study offers a novel approach to evaluating the impact of mathematics education papers using an explainable AI model, a process that has traditionally consumed significant time and resources. The approach not only presents a new paradigm applicable to evaluations in academic fields beyond mathematics education but is also expected to substantially enhance the efficiency and effectiveness of research activities.
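
A minimal sketch of the Random Forest plus SHAP pipeline described above, with synthetic data standing in for the five network features named in the abstract:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

features = ["paper_pagerank", "delta_citations_per_paper", "total_citations",
            "delta_author_h_index", "journal_citations_per_paper"]

rng = np.random.default_rng(0)
X = rng.random((500, len(features)))
y = 2 * X[:, 0] + X[:, 2] + rng.normal(0, 0.1, 500)  # synthetic impact score

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# global importance: mean absolute SHAP value per feature
for name, v in sorted(zip(features, np.abs(shap_values).mean(axis=0)),
                      key=lambda t: -t[1]):
    print(f"{name}: {v:.3f}")
```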

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • Frequency Scanning Interferometry(FSI) system, one of the most promising optical surface measurement techniques, generally results in superior optical performance comparing with other 3-dimensional measuring methods as its hardware structure is fixed in operation and only the light frequency is scanned in a specific spectral band without vertical scanning of the target surface or the objective lens. FSI system collects a set of images of interference fringe by changing the frequency of light source. After that, it transforms intensity data of acquired image into frequency information, and calculates the height profile of target objects with the help of frequency analysis based on Fast Fourier Transform(FFT). However, it still suffers from optical noise on target surfaces and relatively long processing time due to the number of images acquired in frequency scanning phase. 1) a Polarization-based Frequency Scanning Interferometry(PFSI) is proposed for optical noise robustness. It consists of tunable laser for light source, ${\lambda}/4$ plate in front of reference mirror, ${\lambda}/4$ plate in front of target object, polarizing beam splitter, polarizer in front of image sensor, polarizer in front of the fiber coupled light source, ${\lambda}/2$ plate between PBS and polarizer of the light source. Using the proposed system, we can solve the problem of fringe image with low contrast by using polarization technique. Also, we can control light distribution of object beam and reference beam. 2) the signal processing acceleration method is proposed for PFSI, based on parallel processing architecture, which consists of parallel processing hardware and software such as Graphic Processing Unit(GPU) and Compute Unified Device Architecture(CUDA). As a result, the processing time reaches into tact time level of real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiment and the obtained results show the effectiveness of the proposed system and method.