• Title/Summary/Keyword: Design Algorithm (설계알고리즘)


AN ORBIT PROPAGATION SOFTWARE FOR MARS ORBITING SPACECRAFT (화성 근접 탐사를 위한 우주선의 궤도전파 소프트웨어)

  • Song, Young-Joo;Park, Eun-Seo;Yoo, Sung-Moon;Park, Sang-Young;Choi, Kyu-Hong;Yoon, Jae-Cheol;Yim, Jo-Ryeong;Kim, Han-Dol;Choi, Jun-Min;Kim, Hak-Jung;Kim, Byung-Kyo
    • Journal of Astronomy and Space Sciences
    • /
    • v.21 no.4
    • /
    • pp.351-360
    • /
    • 2004
  • An orbit propagation software for Mars orbiting spacecraft has been developed and verified in preparation for future Korean Mars missions. A dynamic model for a Mars orbiting spacecraft has been studied, and Mars-centered coordinate systems are used to express spacecraft state vectors. Corrections to the Mars-centered coordinate system have been made to account for the effects of Mars' precession and nutation. After the spacecraft enters the Sphere of Influence (SOI) of Mars, it experiences various perturbation effects as it approaches the planet. Every possible perturbation effect is considered during integration of the spacecraft state vectors. The Mars50c gravity field model and the Mars-GRAM 2001 model are used to compute the perturbations due to the Martian gravity field and atmospheric drag, respectively. To compute exact locations of the other planets, JPL's DE405 ephemerides are used. The ephemerides of Phobos and Deimos are computed analytically because they are not included in DE405. Mars Global Surveyor's mapping orbit data are used to verify the performance of the developed propagator. After one Martian day of propagation (12 orbital periods), the results show maximum errors of about ±5 m in every position component (radial, cross-track, and along-track) when compared with those from the Astrogator propagator in the Satellite Tool Kit. This result demonstrates the high reliability of the developed software, which can be used to design near-Mars missions for Korea in the future.
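The propagator above numerically integrates the spacecraft state vector under Mars-centered dynamics. As a minimal sketch, assuming two-body point-mass gravity only (omitting the Mars50c harmonics, atmospheric drag, and third-body perturbations the paper includes), a fixed-step RK4 integrator might look like this; the orbit altitude and step size are illustrative:

```python
import math

GM_MARS = 4.2828e4  # Mars gravitational parameter, km^3 / s^2

def deriv(state):
    """Time derivative of the state vector under two-body Mars gravity."""
    x, y, z, vx, vy, vz = state
    r3 = (x*x + y*y + z*z) ** 1.5
    k = -GM_MARS / r3
    return (vx, vy, vz, k*x, k*y, k*z)

def rk4_step(state, dt):
    """Advance the state vector one fixed RK4 step of dt seconds."""
    def shift(s, d, h):
        return tuple(si + h*di for si, di in zip(s, d))
    k1 = deriv(state)
    k2 = deriv(shift(state, k1, dt/2))
    k3 = deriv(shift(state, k2, dt/2))
    k4 = deriv(shift(state, k3, dt))
    return tuple(s + dt/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Hypothetical circular orbit at 3790 km from Mars' center
r0 = 3790.0                       # km
v0 = math.sqrt(GM_MARS / r0)      # circular orbital speed, km/s
state = (r0, 0.0, 0.0, 0.0, v0, 0.0)
for _ in range(3600):             # propagate one hour at 1 s steps
    state = rk4_step(state, 1.0)
radius = math.sqrt(state[0]**2 + state[1]**2 + state[2]**2)
```

For a circular two-body orbit the radius should be conserved, which gives a quick self-check on the integrator before perturbation terms are added to `deriv`.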

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of standardized web technologies and the proliferation of web users have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their businesses, and they also collect, store, and manage web documents published on the web. Along with this increase of information documents to be managed, the need for a solution that locates documents accurately among a huge number of sources has grown, and the search engine solution market keeps expanding accordingly. The most important capability of a search engine is to locate accurate documents among huge information sources. The major metric for evaluating the accuracy of a search engine is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the information returned as answers is actually relevant, whereas recall is a measure of completeness, that is, what percentage of the true answers is retrieved. The two measures are weighted differently according to the applied domain. If information such as patent documents and research papers must be searched exhaustively, it is better to increase recall; when the amount of information is small, it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method quickly locates all matching web documents, even when many search words are entered.
However, this method has a fundamental limitation: it does not consider the search intention of the user and therefore retrieves irrelevant results along with relevant ones, so additional time and effort are needed to sort the relevant results out of everything the engine returns. That is, keyword search can increase recall, but it makes it difficult to locate the web documents a user actually wants because it provides no means of understanding the user's intention and reflecting it in the search process. Thus, this research suggests a new method that combines an ontology-based search solution with the core search functionality of existing search engines. The method enables a search engine to provide optimal results by inferring the search intention of a user. To that end, we build an ontology containing the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the search keywords entered by a user, so the user's intention is reflected in the search process more actively than in existing engines. Based on the proposed method, we implement a prototype search system and test it in the patent domain, where we experiment with retrieving documents relevant to a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that lets a user interact with the system effectively. In future research, we will validate the performance of our prototype by comparing it with other search engine solutions and will extend the applied domain to other information search settings such as portals.
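The precision and recall measures defined in the abstract reduce to simple set ratios over retrieved and relevant document sets; a small sketch with hypothetical document IDs:

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved ∩ relevant| / |retrieved| (exactness);
    Recall    = |retrieved ∩ relevant| / |relevant|  (completeness)."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical document IDs: 4 retrieved, 3 truly relevant, 2 in common
p, r = precision_recall({"d1", "d2", "d3", "d4"}, {"d2", "d4", "d5"})
```

Here precision is 2/4 and recall is 2/3, illustrating the trade-off the abstract describes: returning more documents tends to raise recall while diluting precision.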

Analysis of the Characteristics of the Seismic source and the Wave Propagation Parameters in the region of the Southeastern Korean Peninsula (한반도 남동부 지진의 지각매질 특성 및 지진원 특성 변수 연구)

  • Kim, Jun-Kyoung;Kang, Ik-Bum
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.2 no.1 s.4
    • /
    • pp.135-141
    • /
    • 2002
  • Both the non-linear damping values of the deep and shallow crustal materials and the seismic source parameters are estimated from near-field seismic ground motions observed in the southeastern Korean Peninsula. The non-linear numerical algorithm applied in this study is the Levenberg-Marquardt method. All 25 sets of horizontal ground motions (east-west and north-south components at each seismic station) from 3 events (micro to macro scale) were used for the analysis of damping values and source parameters. The non-linear damping values of the deep and shallow crustal materials were found to be more similar to those of the western United States. The seismic source parameters found in this study also showed that the resultant stress drop values are relatively low compared to those of the western United States. Consequently, comparisons of the various seismic parameters from this study with United States seismo-tectonic data suggest that the seismo-tectonic characteristics of the southeastern Korean Peninsula are more similar to those of the western U.S.
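The Levenberg-Marquardt method named above blends gradient-descent and Gauss-Newton steps through an adaptive damping factor. As an illustrative sketch only, the code below fits a simple exponential decay, standing in for the paper's attenuation/source model, which the abstract does not specify:

```python
import math

def model(x, a, b):
    """Illustrative model y = a * exp(-b * x) (not the paper's actual model)."""
    return a * math.exp(-b * x)

def levenberg_marquardt(xs, ys, a, b, iters=60):
    """Minimal 2-parameter Levenberg-Marquardt fit with adaptive damping."""
    lam = 1e-3
    def sse(a, b):
        try:
            return sum((y - model(x, a, b)) ** 2 for x, y in zip(xs, ys))
        except OverflowError:
            return float("inf")   # runaway trial steps are simply rejected
    for _ in range(iters):
        r = [y - model(x, a, b) for x, y in zip(xs, ys)]
        ja = [math.exp(-b * x) for x in xs]            # d(model)/da
        jb = [-a * x * math.exp(-b * x) for x in xs]   # d(model)/db
        # Damped normal equations: (J^T J + lam * diag(J^T J)) delta = J^T r
        aa = sum(j * j for j in ja)
        bb = sum(j * j for j in jb)
        ab = sum(p * q for p, q in zip(ja, jb))
        ga = sum(j * ri for j, ri in zip(ja, r))
        gb = sum(j * ri for j, ri in zip(jb, r))
        a11, a12, a22 = aa * (1 + lam), ab, bb * (1 + lam)
        det = a11 * a22 - a12 * a12
        if det == 0:
            break
        da = (a22 * ga - a12 * gb) / det
        db = (a11 * gb - a12 * ga) / det
        if sse(a + da, b + db) < sse(a, b):
            a, b, lam = a + da, b + db, lam * 0.5      # accept, relax damping
        else:
            lam *= 2.0                                  # reject, damp harder
    return a, b

# Noise-free synthetic data generated with a = 2.0, b = 0.3
xs = [i * 0.5 for i in range(20)]
ys = [model(x, 2.0, 0.3) for x in xs]
a_fit, b_fit = levenberg_marquardt(xs, ys, 1.0, 1.0)
```

Raising the damping factor pushes the update toward small gradient-descent steps (robust far from the minimum); lowering it recovers fast Gauss-Newton convergence near the minimum.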

Development of a Small Gamma Camera Using NaI(T1)-Position Sensitive Photomultiplier Tube for Breast Imaging (NaI (T1) 섬광결정과 위치민감형 광전자증배관을 이용한 유방암 진단용 소형 감마카메라 개발)

  • Kim, Jong-Ho;Choi, Yong;Kwon, Hong-Seong;Kim, Hee-Joung;Kim, Sang-Eun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Moon-Hae;Joo, Koan-Sik;Kim, Byuug-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.4
    • /
    • pp.365-373
    • /
    • 1998
  • Purpose: The conventional gamma camera is not ideal for scintimammography because of its large detector size (~500 mm in width), causing high cost and low image quality. We are developing a small gamma camera dedicated to breast imaging. Materials and Methods: The small gamma camera system consists of a NaI(Tl) crystal (60 mm × 60 mm × 6 mm) coupled with a Hamamatsu R3941 Position Sensitive Photomultiplier Tube (PSPMT), a resistor chain circuit, preamplifiers, nuclear instrument modules, an analog-to-digital converter, and a personal computer for control and display. The PSPMT was read out using standard resistive charge division, which multiplexes the 34 crossed-wire anode channels into 4 signals (X⁺, X⁻, Y⁺, Y⁻). These signals were individually amplified by four preamplifiers and then shaped and amplified by amplifiers. The signals were discriminated and digitized via a triggering signal and used to localize the position of an event by applying the Anger logic. Results: The intrinsic sensitivity of the system was approximately 8,000 counts/sec/μCi. High quality flood and hole-mask images were obtained. A breast phantom containing 2-7 mm diameter spheres was successfully imaged with a parallel-hole collimator. The image displayed accurate size and activity distribution over the imaging field of view. Conclusion: We have successfully developed a small gamma camera using a NaI(Tl)-PSPMT and nuclear instrument modules. The small gamma camera developed in this study might improve the diagnostic accuracy of scintimammography by optimally imaging the breast.
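The resistive charge division and Anger logic described above reduce event localization to ratios of the four multiplexed signals: the normalized difference of each opposing pair gives the coordinate, and the sum tracks deposited energy. A minimal sketch with hypothetical signal amplitudes in arbitrary units:

```python
def anger_position(x_plus, x_minus, y_plus, y_minus):
    """Anger logic on the four charge-division outputs (X+, X-, Y+, Y-):
    normalized differences give the event position in [-1, 1]; the sum of
    all four signals is proportional to the deposited energy."""
    x = (x_plus - x_minus) / (x_plus + x_minus)
    y = (y_plus - y_minus) / (y_plus + y_minus)
    energy = x_plus + x_minus + y_plus + y_minus
    return x, y, energy

# Hypothetical event: more charge collected on the X+ end of the chain
x, y, e = anger_position(3.0, 1.0, 2.0, 2.0)
```

In practice the energy sum would also feed the discriminator that gates which events are accepted for imaging.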


Social Tagging-based Recommendation Platform for Patented Technology Transfer (특허의 기술이전 활성화를 위한 소셜 태깅기반 지적재산권 추천플랫폼)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.53-77
    • /
    • 2015
  • Korea has witnessed an increasing number of domestic patent applications, but a majority of them are not utilized to their full potential and end up becoming obsolete. According to the 2012 National Congress' Inspection of Administration, about 73% of the patents held by universities and public-funded research institutions failed to create social value and remain latent. One of the main problems is that patent creators such as individual researchers, universities, or research institutions lack the ability to commercialize their patents into viable businesses with the enterprises that need them. On the enterprise side, it is also hard to find appropriate patents by keyword search alone. This study proposes a patent recommendation system that can identify and recommend intellectual property rights matching a user's fields of interest among a rapidly accumulating number of patent assets in an easier and more efficient manner. The proposed system extracts core contents and technology sectors from the existing pool of patents and combines them with secondary social knowledge derived from tag information created by users in order to find the patents best recommended for each user. That is, in an early stage where no tag information has accumulated, the recommendation is done using content characteristics, which are identified through an analysis of the keywords contained in attributes such as 'Title of Invention' and 'Claims'. To do this, the suggested system extracts only nouns from patents and assigns each noun a weight according to its importance across all patents by performing a TF-IDF analysis. It then finds patents whose weights are similar to those of the patents a user prefers. In this paper, this similarity is called the 'Domain Similarity'.
Next, the suggested system extracts the characteristics of a technology sector from patent documents by analyzing the International Patent Classification (IPC) codes. Every patent has more than one IPC code, and each user can attach more than one tag to the patents they like; thus each user has a set of IPC codes included in their tagged patents. The suggested system uses this IPC set to analyze the technology preference of each user and find well-fitted patents. To do this, it calculates a 'Technology Similarity' between the user's set of IPC codes and the IPC codes contained in all other patents. Later, when the tag information of multiple users has accumulated, the system expands the recommendations by considering other users' social tags relating to the patents tagged by a given user. The similarity between the tag information of a user's preferred patents and other patents is called the 'Social Similarity' in this paper. Lastly, a 'Total Similarity' is calculated by adding these three different similarities, and the patents with the highest 'Total Similarity' are recommended to each user. The suggested system was applied to a total of 1,638 Korean patents obtained from the Korea Industrial Property Rights Information Service (KIPRIS) run by the Korea Intellectual Property Office. However, since this original dataset does not include tag information, we created virtual tag information and used it to construct a semi-virtual dataset. The proposed recommendation algorithm was implemented in JAVA, and a prototype graphical user interface was also designed for this study. As the proposed system has no dependent variables and uses virtual data, it is impossible to verify the recommendation system with a statistical method.
Therefore, the study uses a scenario test to verify the operational feasibility and recommendation effectiveness of the system. The results of this study are expected to improve the chance of matching promising patents with the most suitable businesses. It is expected that users' experiential knowledge can be accumulated, managed, and utilized in the as-is patent system, which currently only manages standardized patent information.
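The combination of Domain, Technology, and Social Similarity into a Total Similarity can be sketched as follows. The abstract does not give the individual similarity formulas, so cosine similarity over TF-IDF weights and Jaccard overlap over IPC/tag sets are assumptions for illustration, as are the patent attributes used:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse TF-IDF weight dicts."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def jaccard(a, b):
    """Set overlap, used here for both IPC codes and social tags."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def total_similarity(preferred, candidate):
    """Total Similarity = Domain + Technology + Social, per the abstract.
    Keys 'tfidf', 'ipc', and 'tags' are hypothetical attribute names."""
    domain = cosine(preferred["tfidf"], candidate["tfidf"])
    technology = jaccard(preferred["ipc"], candidate["ipc"])
    social = jaccard(preferred["tags"], candidate["tags"])
    return domain + technology + social

# Hypothetical patents: noun weights, IPC codes, and user tags
preferred = {"tfidf": {"robot": 1.0, "arm": 1.0}, "ipc": {"B25J"}, "tags": {"automation"}}
candidate = {"tfidf": {"robot": 1.0}, "ipc": {"B25J", "G06F"}, "tags": {"automation"}}
score = total_similarity(preferred, candidate)
```

Ranking all candidate patents by `score` and returning the top results reproduces the recommendation step the abstract describes.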

Development of Convertor supporting Multi-languages for Mobile Network (무선전용 다중 언어의 번역을 지원하는 변환기의 구현)

  • Choe, Ji-Won;Kim, Gi-Cheon
    • The KIPS Transactions:PartC
    • /
    • v.9C no.2
    • /
    • pp.293-296
    • /
    • 2002
  • UP Link is a commercial product that converts HTML to HDML in order to show internet WWW contents in mobile environments. When the UP browser accesses HTML pages, the agent in UP Link directs the converter to change the HTML into HDML. I-Mode, developed by NTT DoCoMo of Japan, has accumulated many contents through long and stable commercial service. Micro Explorer (ME), developed in the Stinger project, also offers many additional functions. In this paper, we designed and implemented a WAP converter that can accept C-HTML and mHTML contents. The C-HTML format used by I-Mode is a subset of HTML, and the mHTML format used by ME is similar to C-HTML, so content providers can develop C-HTML contents more easily than for WAP and other formats. Since C-HTML, mHTML, and WML are all used in mobile environments, the transmission capacity limit per page is also similar. We first build a tag-matching table and then apply the conversion algorithm to it. If no matching element is found, we arrange tags supported only by WML so that the content displays in the best possible shape. As a result, over 90% of contents can be converted.
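The tag-matching step described above can be sketched as a table lookup over the markup. The table entries and fallback policy below are hypothetical simplifications (the sketch drops unmatched tags rather than substituting WML-only ones), not the converter's actual mapping:

```python
import re

# Hypothetical excerpt of a C-HTML/mHTML -> WML tag-matching table
TAG_MAP = {"b": "b", "i": "i", "p": "p", "br": "br"}

TAG_RE = re.compile(r"</?([a-zA-Z][a-zA-Z0-9]*)[^>]*>")

def convert(markup):
    """Rewrite tags found in the matching table; drop unmatched tags so the
    remaining markup still displays in the best possible shape."""
    def repl(m):
        name = m.group(1).lower()
        if name not in TAG_MAP:
            return ""                                   # unmatched: drop the tag
        slash = "/" if m.group(0).startswith("</") else ""
        return "<%s%s>" % (slash, TAG_MAP[name])
    return TAG_RE.sub(repl, markup)

# Unsupported <blink> is stripped; supported <B> is normalized to WML <b>
result = convert("<B>hi</B><blink>x</blink>")
```

A production converter would also rewrite attributes and split output to respect the per-page transmission limit the abstract mentions.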

Development of the Risk Evaluation Model for Rear End Collision on the Basis of Microscopic Driving Behaviors (미시적 주행행태를 반영한 후미추돌위험 평가모형 개발)

  • Chung, Sung-Bong;Song, Ki-Han;Park, Chang-Ho;Chon, Kyung-Soo;Kho, Seung-Young
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.6
    • /
    • pp.133-144
    • /
    • 2004
  • A model and a measure that can evaluate the risk of rear-end collision are developed. Most traffic accidents involve multiple causes, such as the human factor, the vehicle factor, and the highway element at any given time; thus, these factors should be considered in analyzing the risk of an accident and in developing safety models. Although most risky situations and accidents on the road result from a driver's poor response to various stimuli, many researchers have modeled risk or accidents by analyzing only the stimuli, without considering the driver's response, so the reliability of those models turned out to be low. Thus, in developing the model, behaviors of a driver such as reaction time and deceleration rate are considered. In the past, most studies tried to analyze the relationship between risk and accidents directly, but, due to the difficulty of establishing direct relationships between these factors, they developed models using indirect factors such as volume, speed, etc. However, if the relationship between risk and accidents is examined in detail, it can be seen that they are linked by the behaviors of a driver: depending on the driver, the risk present in the road-vehicle system may be ignored or may demand the driver's attention. An accident therefore depends on how a driver handles risk, so the risk most related to accident occurrence is not the risk itself but the risk as responded to by the driver. Thus, in this study, the behaviors of a driver are incorporated in the model, and three accident-related concepts are introduced to reflect them. Safe stopping distance and accident occurrence probability were used for better understanding and for more reliable modeling of the risk.
The index representing the risk is developed based on measures used in evaluating noise levels, and, for risk comparison between various situations, an equivalent risk level considering intensity and duration is developed by means of a weighted average. Validation is performed with field surveys on an expressway in Seoul; a test vehicle collected traffic flow data such as deceleration rate, speed, and spacing. Based on these data, the risk by section, lane, and traffic flow condition is evaluated and compared with accident data and traffic conditions. The evaluated risk level corresponds closely to the patterns of actual traffic conditions and accident counts. The model and the method developed in this study can be applied to various fields, such as safety testing of traffic flow, establishment of operation and management strategies for reliable traffic flow, and safety testing of control algorithms in advanced safety vehicles, among others.
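A sketch of the two quantities named above. The safe-stopping-distance form (reaction-time travel plus braking-distance difference) is standard kinematics; the duration-weighted equivalent level mirrors the noise Leq measure the abstract alludes to, but its exact formula here is an assumption, not taken from the paper:

```python
import math

def safe_stopping_distance(v_follow, v_lead, reaction_time,
                           decel_follow, decel_lead):
    """Spacing (m) needed for the following vehicle to stop behind the lead
    vehicle: reaction-time travel plus the braking-distance difference.
    Speeds in m/s, decelerations in m/s^2."""
    reaction_travel = v_follow * reaction_time
    braking_follow = v_follow ** 2 / (2.0 * decel_follow)
    braking_lead = v_lead ** 2 / (2.0 * decel_lead)
    return max(0.0, reaction_travel + braking_follow - braking_lead)

def equivalent_risk_level(levels, durations):
    """Duration-weighted equivalent risk level, by analogy with the noise
    Leq measure (log-domain weighted average); formula is an assumption."""
    total = sum(durations)
    mean = sum(t * 10.0 ** (l / 10.0) for l, t in zip(levels, durations)) / total
    return 10.0 * math.log10(mean)

# Both vehicles at 25 m/s (90 km/h), 1 s reaction, equal braking ability:
# only the reaction-time travel remains as required spacing
spacing = safe_stopping_distance(25.0, 25.0, 1.0, 6.0, 6.0)
# Equal risk levels held for different durations average to themselves
level = equivalent_risk_level([50.0, 50.0], [1.0, 3.0])
```

The equivalent level lets short, intense risk episodes be compared against longer, milder ones on a single scale, which is what the weighted-average index in the study is for.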

A Study of a Non-commercial 3D Planning System, Plunc for Clinical Applicability (비 상업용 3차원 치료계획시스템인 Plunc의 임상적용 가능성에 대한 연구)

  • Cho, Byung-Chul;Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal
    • /
    • v.16 no.1
    • /
    • pp.71-79
    • /
    • 1998
  • Purpose: The objective of this study is to introduce our installation of Plunc, a non-commercial 3D planning system, and to confirm its clinical applicability in various treatment situations. Materials and Methods: We obtained the source code of Plunc, offered by the University of North Carolina, and installed it on a Pentium Pro 200 MHz PC (128 MB RAM, Millennium VGA) running the Linux operating system. To examine the accuracy of the dose distributions calculated by Plunc, we input the 6 MV photon beam data of our linear accelerator (Siemens MXE 6740), including tissue-maximum ratios, scatter-maximum ratios, attenuation coefficients, and the shapes of wedge filters. We then compared the dose distributions calculated by Plunc (percent depth dose, PDD; dose profiles with and without wedge filters; oblique incident beams; and dose distributions under an air gap) with measured values. Results: Plunc operated in almost real time, except for spending about 10 seconds on the full-volume dose distribution and dose-volume histogram (DVH) on the PC described above. Compared with measurements for irradiations at 90-cm SSD and 10-cm-depth isocenter, the PDD curves calculated by Plunc showed inaccuracies of no more than 1%, except in the buildup region. For dose profiles with and without wedge filters, the calculated values were accurate within 2%, except in the low-dose region outside the irradiation field, where Plunc showed a 5% dose reduction. For the oblique incident beam, there was good agreement except in the low-dose region below 30% of the isocenter dose. In the case of the dose distribution under an air gap, there was a 5% error in the central-axis dose. Conclusion: By comparing Plunc's photon dose calculations with measurements, we confirmed that Plunc shows acceptable accuracy of about 2-5% in typical treatment situations, comparable to commercial planning systems using correction-based algorithms. Plunc does not yet have a function for electron beam planning.
However, it is possible to implement electron dose calculation modules or more accurate photon dose calculations in the Plunc system. Plunc is shown to be useful for overcoming many limitations of 2D planning systems in clinics where a commercial 3D planning system is not available.


Development of a Feasibility Evaluation Model for Apartment Remodeling with the Number of Households Increasing at the Preliminary Stage (노후공동주택 세대수증가형 리모델링 사업의 기획단계 사업성평가 모델 개발)

  • Koh, Won-kyung;Yoon, Jong-sik;Yu, Il-han;Shin, Dong-woo;Jung, Dae-woon
    • Korean Journal of Construction Engineering and Management
    • /
    • v.20 no.4
    • /
    • pp.22-33
    • /
    • 2019
  • The government has steadily revised and developed laws and systems to activate the remodeling of apartments in response to the problems of aging apartment stock. Despite such efforts, however, remodeling has yet to take off. Among the many reasons, this study noted that there were no tools for reasonable profitability judgments and decision making at the preliminary stage of a remodeling project; thus, a feasibility evaluation model was developed. Generally, profitability judgments are made after the conceptual design, but the decision to pursue a remodeling project is made at the preliminary stage, so a feasibility evaluation model is required at that stage. Accordingly, in this study, a feasibility evaluation model was developed for determining preliminary-stage profitability. Construction costs, business expenses, financial expenses, and sales revenue were calculated using the information initially available and remodeling variables derived from existing cases. Through this process, we developed an algorithm that gives an overview of the return on investment. In addition, the developed preliminary-stage feasibility evaluation model was applied to three cases to verify its applicability. Although applied to only three cases, the difference between the model's forecasts and the actual case values was less than 5%, which is considered highly applicable. If the case base is expanded in the future, it will be a useful tool for actual work. The feasibility evaluation model developed in this study will support decision making by union members, and if the model is applied in different regions, it is expected to help local governments understand the size of possible remodeling projects.
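The return-on-investment overview mentioned above combines the cost and revenue items the abstract lists. A hypothetical sketch of that final step; the item names, aggregation, and figures are illustrative and not the model's actual variables or case data:

```python
def remodeling_roi(construction_cost, business_expense,
                   financial_expense, sales_revenue):
    """Hypothetical preliminary-stage profitability overview: simple return
    on investment from the four items named in the abstract. All arguments
    must be in the same currency unit."""
    total_cost = construction_cost + business_expense + financial_expense
    return (sales_revenue - total_cost) / total_cost

# Illustrative figures only (e.g., in 100 million KRW), not from the paper
roi = remodeling_roi(100.0, 20.0, 10.0, 156.0)
```

In the actual model each input would itself be estimated from the remodeling variables derived from the existing cases, rather than supplied directly.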

Validation of Surface Reflectance Product of KOMPSAT-3A Image Data: Application of RadCalNet Baotou (BTCN) Data (다목적실용위성 3A 영상 자료의 지표 반사도 성과 검증: RadCalNet Baotou(BTCN) 자료 적용 사례)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_2
    • /
    • pp.1509-1521
    • /
    • 2020
  • Experiments for validating the surface reflectance produced by the Korea Multi-Purpose Satellite (KOMPSAT-3A) were conducted using Chinese Baotou (BTCN) data, one of the four sites of the Radiometric Calibration Network (RadCalNet), a portal that provides spectrophotometric reflectance measurements. The atmospheric reflectance and surface reflectance products were generated using an extension of the open-source Orfeo ToolBox (OTB), which was redesigned and implemented to extract those reflectance products in batches. Three image data sets from 2016, 2017, and 2018 were processed with two versions of the sensor model, ver. 1.4 released in 2017 and ver. 1.5 released in 2019, whose gain and offset are applied in the absolute atmospheric correction. The results showed that the reflectance products from ver. 1.4 matched the RadCalNet BTCN data relatively well compared with those from ver. 1.5. In addition, surface reflectance products obtained from Landsat-8 imagery using the USGS LaSRC algorithm and from Sentinel-2B imagery using the SNAP Sen2Cor program were used to quantitatively assess the differences from KOMPSAT-3A. Relative to the RadCalNet BTCN data, the differences in the KOMPSAT-3A surface reflectance were highly consistent: -0.031 to 0.034 in the B band, -0.001 to 0.055 in the G band, -0.072 to 0.037 in the R band, and -0.060 to 0.022 in the NIR band. The surface reflectance of KOMPSAT-3A also reached an accuracy level suitable for further applications, compared with those of the Landsat-8 and Sentinel-2B images. The results of this study confirm the applicability of Analysis Ready Data (ARD) for surface reflectance from high-resolution satellites.
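The sensor-model gain and offset mentioned above enter the processing chain when digital numbers are converted to radiance and then to top-of-atmosphere (TOA) reflectance; the paper's surface reflectance additionally requires an atmospheric correction on top of this step. A generic sketch of the TOA conversion with illustrative coefficients, not KOMPSAT-3A's actual calibrated gain/offset or solar irradiance values:

```python
import math

def toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """Digital number -> at-sensor radiance via the sensor model's
    gain/offset, then radiance -> top-of-atmosphere reflectance.
    esun is the band's exo-atmospheric solar irradiance; d_au is the
    Earth-Sun distance in astronomical units."""
    radiance = gain * dn + offset                     # units assumed W/(m^2 sr um)
    sun_zenith = math.radians(90.0 - sun_elev_deg)
    return (math.pi * radiance * d_au ** 2) / (esun * math.cos(sun_zenith))

# Illustrative values only: DN 1000, gain 0.05, sun at zenith
rho = toa_reflectance(dn=1000, gain=0.05, offset=0.0,
                      esun=1850.0, sun_elev_deg=90.0)
```

Swapping the ver. 1.4 and ver. 1.5 gain/offset pairs into this step is exactly where the two sensor-model versions compared in the study diverge.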