• Title/Summary/Keyword: Information code


A Study on the dose distribution produced by $^{32}$P source form in treatment for inhibiting restenosis of coronary artery (관상동맥 재협착 방지를 위한 치료에서 $^{32}$P 핵종의 선원 형태에 따른 선량분포에 관한 연구)

  • 김경화;김영미;박경배
    • Progress in Medical Physics
    • /
    • v.10 no.1
    • /
    • pp.1-7
    • /
    • 1999
  • In this study, the dose distributions of a $^{32}$P uniform cylindrical volume source and a surface source, $^{32}$P being a pure $\beta$ emitter, were calculated in order to obtain information relevant to the use of a balloon catheter and a radioactive stent. The dose distributions of $^{32}$P were calculated with the EGS4 code system. The sources are assumed to be distributed uniformly in the volume and on the surface of a cylinder with a radius of 1.5 mm and a length of 20 mm. The energy of the emitted $\beta$ particles is sampled at random from the $\beta$ energy spectrum evaluated by solving the Dirac equation for the Coulomb potential. Liquid water is used to simulate particle transport in the human body. The dose rates in a target at a 0.5 mm radial distance from the surface of the cylindrical volume source and the surface source are 12.133 cGy/s per GBq (0.449 cGy/s per mCi, uncertainty 1.51%) and 24.732 cGy/s per GBq (0.915 cGy/s per mCi, uncertainty 1.01%), respectively. The dose rates for both sources decrease with distance in both the radial and the axial direction. On the basis of these results, the initial activities determined for the balloon catheter and the radioactive stent using the $^{32}$P isotope were 29.69 mCi and 1.2278 $\mu$Ci, respectively. The total absorbed dose for an optimal therapeutic regimen is taken to be 20 Gy, and the treatment time in the case of the balloon catheter is less than 3 min. Absorbed doses in targets placed along the radial direction were also calculated for the two sources when the initial activity is expressed as a volume activity density of 1 mCi/ml for the cylindrical volume source and an area activity density of 0.1 mCi/cm$^2$ for the surface source. The absorbed dose distribution around $^{32}$P cylindrical sources of different sizes can easily be calculated from our results when the volume activity density and the area activity density of the source are known.
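
As a rough cross-check of the figures quoted above, the sketch below reproduces the back-of-envelope dose arithmetic implied by the abstract (a hypothetical Python illustration, not the EGS4 transport calculation itself): the balloon dwell time follows from the target dose and the quoted dose rate per mCi, and the stent activity from integrating the dose rate over the full decay of $^{32}$P (physical half-life about 14.3 days).

```python
# Hypothetical cross-check of the dose rates reported in the abstract.
import math

P32_HALF_LIFE_S = 14.27 * 86400          # P-32 physical half-life (~14.27 days), in seconds
TARGET_DOSE_CGY = 2000.0                 # 20 Gy prescribed at 0.5 mm from the source surface

# Dose rates at 0.5 mm radial distance (abstract values, cGy/s per mCi)
RATE_VOLUME  = 0.449                     # cylindrical volume source (balloon catheter)
RATE_SURFACE = 0.915                     # surface source (radioactive stent)

def balloon_treatment_time(activity_mci: float) -> float:
    """Irradiation time (s) for the balloon catheter; decay is neglected for a minutes-long dwell."""
    return TARGET_DOSE_CGY / (RATE_VOLUME * activity_mci)

def stent_initial_activity(target_cgy: float = TARGET_DOSE_CGY) -> float:
    """Initial activity (mCi) so the fully decayed stent delivers the target dose:
    total dose = R0 * tau, with tau the mean lifetime T1/2 / ln 2."""
    tau = P32_HALF_LIFE_S / math.log(2.0)
    return target_cgy / (RATE_SURFACE * tau)

print(f"balloon dwell time: {balloon_treatment_time(29.69) / 60:.2f} min")   # ~2.5 min (< 3 min)
print(f"stent activity    : {stent_initial_activity() * 1000:.3f} uCi")      # ~1.23 uCi
```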


Sensitivity Analysis of Meteorology-based Wildfire Risk Indices and Satellite-based Surface Dryness Indices against Wildfire Cases in South Korea (기상기반 산불위험지수와 위성기반 지면건조지수의 우리나라 산불발생에 대한 민감도분석)

  • Kong, Inhak;Kim, Kwangjin;Lee, Yangwon
    • Journal of Cadastre & Land InformatiX
    • /
    • v.47 no.2
    • /
    • pp.107-120
    • /
    • 2017
  • Many wildfire risk indices are used worldwide, but objective comparisons among such wildfire risk indices and surface dryness indices have not been conducted for wildfire cases in Korea. This paper describes a sensitivity analysis of wildfire risk indices and surface dryness indices for Korea using the LDAPS (Local Analysis and Prediction System) meteorological dataset on a 1.5 km grid and MODIS (Moderate-resolution Imaging Spectroradiometer) satellite images on a 1 km grid. We analyzed meteorology-based wildfire risk indices such as the Australian FFDI (forest fire danger index), the Canadian FFMC (fine fuel moisture code), the American HI (Haines index), and the academically proposed MNI (modified Nesterov index). We also examined satellite-based surface dryness indices such as the NDDI (normalized difference drought index) and the TVDI (temperature vegetation dryness index). Comparing the six indices against 120 wildfire cases with damaged areas over 1 ha during the period from January 2013 to May 2017, we found that the FFDI and FFMC showed good predictability for most wildfire cases, whereas the MNI and TVDI were not suitable for Korea. The NDDI can be used as a proxy parameter for wildfire risk because its average CDF (cumulative distribution function) scores were stably high irrespective of fire size. The indices tested in this paper should be carefully chosen and used in an integrated way so that they can contribute to wildfire forecasting in Korea.
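
The CDF-score comparison mentioned above can be illustrated with a small sketch (hypothetical Python with illustrative names; the paper does not publish code): for each fire day, the index value at the fire location is ranked against the index's own climatological distribution, and a sensitive index yields scores close to 1.

```python
# Hypothetical sketch of a CDF-based sensitivity check for one wildfire risk index.
import numpy as np

def cdf_score(climatology: np.ndarray, value_on_fire_day: float) -> float:
    """Empirical CDF of the index evaluated at the fire-day value (range 0..1)."""
    clim = np.sort(climatology[~np.isnan(climatology)])
    return np.searchsorted(clim, value_on_fire_day, side="right") / clim.size

def mean_cdf_score(index_series: dict, fire_days: list) -> float:
    """Average CDF score over all fire cases for one index (e.g. FFDI, FFMC, NDDI)."""
    clim = np.array(list(index_series.values()), dtype=float)
    scores = [cdf_score(clim, index_series[d]) for d in fire_days if d in index_series]
    return float(np.mean(scores))

# usage sketch: index_series maps date -> daily index value at the fire location,
# fire_days is the list of wildfire dates; a higher mean score means a better predictor.
```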

Research for Application of Interactive Data Broadcasting Service in DMB (DMB에서의 양방향 데어터방송 서비스도입에 관한 연구)

  • Kim, Jong-Geun;Choe, Seong-Jin;Lee, Seon-Hui
    • Broadcasting and Media Magazine
    • /
    • v.11 no.4
    • /
    • pp.104-117
    • /
    • 2006
  • In this paper, we analyze the introduction of interactive data broadcasting services in DMB (Digital Multimedia Broadcasting) in accordance with the convergence of services and technologies. With the acceleration of digital convergence in the ubiquitous era, substantial development of digital media technology and the convergence of the broadcasting and telecommunication industries are being witnessed. These trends have given rise to newly converged products such as DMB (Digital Multimedia Broadcasting), WCDMA (Wideband Code Division Multiple Access), WiBro (Wireless Broadband Internet), IP-TV (Internet Protocol TV), and HSDPA (High Speed Downlink Packet Access). The preparatory stage for the implementation of interactive data broadcasting services will be reached by the end of December 2006. DMB is the first successful convergence service between broadcasting and telecommunication in the new media era, and multimedia technologies and services are its core elements. Data broadcasting will not only offer various interactive information services such as news, weather, and broadcasting programs, but will also be linked with characteristic mobile phone functions such as calling and SMS (Short Message Service) via the return channel.

Study on the LOWTRAN7 Simulation of the Atmospheric Radiative Transfer Using CAGEX Data (CAGEX 관측자료를 이용한 LOWTRAN7의 대기 복사전달 모의에 대한 조사)

  • 장광미;권태영;박경윤
    • Korean Journal of Remote Sensing
    • /
    • v.13 no.2
    • /
    • pp.99-120
    • /
    • 1997
  • Solar radiation is scattered and absorbed by atmospheric constituents before it reaches the surface and, after being reflected at the surface, again until it reaches the satellite sensor. Consideration of radiative transfer through the atmosphere is therefore essential for the quantitative analysis of satellite-sensed data, especially in the shortwave region. This study examined the feasibility of using a radiative transfer code for estimating atmospheric effects on satellite remote sensing data. To do this, the fluxes simulated by LOWTRAN7 were compared with CAGEX data in the shortwave region. The CAGEX (CERES/ARM/GEWEX Experiment) data provide (1) atmospheric soundings, aerosol optical depth, and albedo, (2) ARM (Atmospheric Radiation Measurement) radiation fluxes measured by pyrgeometers, a pyrheliometer, and a shadow-band pyranometer, and (3) broadband shortwave fluxes simulated by Fu-Liou's radiative transfer code. To simulate the aerosol effect with the radiative transfer model, the aerosol optical characteristics were derived from the observed aerosol column optical depth, Spinhirne's experimental vertical distribution of the scattering coefficient, and D'Almeida's statistical radiative characteristics of atmospheric aerosols. LOWTRAN7 simulations were performed for 31 samples from completely clear days. The LOWTRAN7 results and the CAGEX data were compared for the upward, downward direct, and downward diffuse solar fluxes at the surface and the upward solar flux at the top of the atmosphere (TOA). The standard errors of the LOWTRAN7 simulation for these components are within 5%, except for the downward diffuse solar flux at the surface (6.9%). The results show that a large part of the error in the LOWTRAN7 flux simulation appears in the diffuse component, due mainly to scattering by atmospheric aerosols. To improve the accuracy of radiative transfer simulations, better information about the radiative characteristics of atmospheric aerosols is needed.
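
The model-versus-observation comparison described above amounts to a relative error per flux component; a minimal sketch follows (hypothetical Python, assuming the "standard error" quoted above means the RMS difference expressed as a percentage of the mean observed flux).

```python
# Hypothetical sketch of the flux comparison: relative error of LOWTRAN7-simulated
# fluxes against the CAGEX reference values for one shortwave component
# (surface upward, downward direct, downward diffuse, or TOA upward).
import numpy as np

def relative_standard_error(simulated: np.ndarray, observed: np.ndarray) -> float:
    """RMS difference between model and observation, as a percentage of the mean observed flux."""
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

# usage sketch: arrays hold the 31 clear-day samples for one flux component;
# values within ~5% would match the result reported above.
```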

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.1-23
    • /
    • 2014
  • Market forecasting aims to estimate the sales volume of a product or service sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting assists in determining the timing of new product introduction and product design, and in establishing production plans and marketing strategies, enabling a more efficient decision-making process. Accurate market forecasting also enables governments to organize the national budget more efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods using past time series data, categorize products showing similar growth patterns, understand the markets in the industry, and forecast the future outlook of such products. The study suggests a useful and meaningful process (or methodology) for identifying market growth patterns with quantitative growth models and a data mining algorithm. The methodology is as follows. In the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as the volume of sales and domestic consumption for a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other relevant government organizations. Data pre-processing is performed for series that cannot be analyzed directly, for example because of a lack of past data or the alteration of product code names. In the second stage, an optimal model for market forecasting is selected. The choice of model can vary with the characteristics of each categorized industry. As this study focuses on the ICT industry, in which new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected. A hybrid model that combines different models can also be considered; the hybrid model considered in this study estimates the size of the market potential through the Logistic and Gompertz models, and these figures are then used in the Bass model. The third stage is to evaluate which model explains the data most accurately. To do this, the parameters are estimated from the collected past time series data, the models' predicted values are generated, and the root-mean-squared error (RMSE) is calculated. The model that shows the lowest average RMSE value across all product types is considered the best model. In the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with the self-organizing map algorithm. A self-organizing map is trained with the market pattern parameters of all products or services as input data, and the products or services are organized onto an $N{\times}N$ map. The number of clusters is increased from 2 to M, depending on the characteristics of the nodes on the map. The clusters are divided into zones, and the clusters that provide the most meaningful explanation are selected. Based on the final selection of clusters, the boundaries between the nodes are drawn and the market growth pattern map is completed. The last step is to determine the final characteristics of the clusters as well as the market growth curves. The average of the market growth pattern parameters in each cluster is taken as its representative figure. Using this figure, a growth curve is drawn for each cluster and its characteristics are analyzed; taking into consideration the product types in each cluster, their characteristics can also be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
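
A minimal sketch of the second and third stages (fitting the candidate growth models and selecting by RMSE) is given below, assuming a cumulative sales series and illustrative starting values; the fitted parameters of the winning model would then serve as the input vectors for the self-organizing map.

```python
# Hypothetical sketch: fit Logistic, Gompertz, and Bass growth models to a
# cumulative sales series and keep the model with the lowest RMSE.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, a, b):          # m: market potential, a/b: shape parameters
    return m / (1.0 + a * np.exp(-b * t))

def gompertz(t, m, a, b):
    return m * np.exp(-a * np.exp(-b * t))

def bass(t, m, p, q):              # p: innovation coefficient, q: imitation coefficient
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def best_model(t: np.ndarray, y: np.ndarray):
    """Return (name, params, rmse) for the model with the lowest RMSE on (t, y)."""
    candidates = {"logistic": logistic, "gompertz": gompertz, "bass": bass}
    start = {"logistic": (y.max() * 1.5, 10.0, 0.5),   # illustrative initial guesses
             "gompertz": (y.max() * 1.5, 3.0, 0.3),
             "bass":     (y.max() * 1.5, 0.03, 0.4)}
    results = []
    for name, f in candidates.items():
        params, _ = curve_fit(f, t, y, p0=start[name], maxfev=20000)
        rmse = np.sqrt(np.mean((f(t, *params) - y) ** 2))
        results.append((name, params, rmse))
    return min(results, key=lambda r: r[2])

# usage sketch: t in years since product launch, y as cumulative sales volume.
```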

A Study on the Method of Minimizing the Bit-Rate Overhead of H.264 Video when Encrypting the Region of Interest (관심영역 암호화 시 발생하는 H.264 영상의 비트레이트 오버헤드 최소화 방법 연구)

  • Son, Dongyeol;Kim, Jimin;Ji, Cheongmin;Kim, Kangseok;Kim, Kihyung;Hong, Manpyo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.2
    • /
    • pp.311-326
    • /
    • 2018
  • This paper presents experiments using the News sample video at QCIF ($176{\times}144$) resolution with the JM v10.2 reference code of H.264/AVC. Encrypting a region of interest (ROI) causes drift, because subsequent frames keep referencing the encrypted region, in accordance with the motion prediction and compensation characteristics of the H.264 standard. To mitigate this drift, the most recent related method periodically re-inserts encrypted I-pictures, which increases the amount of additional computation and thereby the bit-rate overhead of the entire video. In the proposed method, the reference search range of the blocks and frames in the ROI to be encrypted is therefore restricted during motion prediction and compensation for each frame, while the reference search range in the non-ROI is left unrestricted to maintain normal encoding efficiency. After encoding the video with this restricted reference search range, RC4 bit-stream encryption is applied to identifiable ROIs such as faces in order to protect personal information in the video. The experimental results for the unencrypted original video, the latest related method, and the proposed method, implemented under the same conditions, are compared and analyzed. The bit-rate overhead of the proposed method is 2.35% higher than that of the original video and 14.93% lower than that of the latest related method, while temporal drift is still mitigated. These improvements were verified by the experiments in this study.
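
The RC4 bit-stream encryption step applied to the ROI payload can be sketched as below (a generic RC4 implementation in Python for illustration only; the encoder-side restriction of the reference search range in the JM code is not shown).

```python
# Hypothetical sketch of RC4 stream encryption applied to ROI payload bytes.
def rc4_keystream(key: bytes):
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        yield s[(s[i] + s[j]) % 256]

def encrypt_roi(roi_bytes: bytes, key: bytes) -> bytes:
    """XOR the ROI byte stream with the RC4 keystream (decryption is the same operation)."""
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in roi_bytes)
```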

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding(ARSU) and the Software Re/reverse-engineering Environment(SRE) (실시간 소프트웨어의 조절적${\cdot}$단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports on research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, largely developed by the author, are called Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to its size and complexity, such software is commonly very hard to understand during the reengineering process. This research facilitates scalable re/reverse-engineering of real-time software based on its architecture, seen from three perspectives: the structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, the specification (outline), and the algorithm (detail) of the software, based on hierarchically organized parent-child relationships. The basic building block of the architecture is a software unit (SWU), generated according to user-defined criteria. The architecture supports navigation of the software in a top-down or bottom-up way. It captures the specification and algorithm views at different levels of abstraction, and it also exposes the functional and behavioral information at these levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, and so on; each feature of this view captures a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, and the like, showing the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the ability to abstract and explode this dimensional information within the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software, and allow engineers to extract reusable components from the software during the reengineering process.
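
A hypothetical sketch of the hierarchical SWU organization described above follows (illustrative Python; the class and field names are assumptions, not the tool's actual API): each software unit carries its structural, functional, and behavioral information and supports top-down navigation through the architecture.

```python
# Hypothetical sketch of a hierarchical software-unit (SWU) structure with three views.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SWU:
    name: str
    parent: Optional["SWU"] = None
    children: List["SWU"] = field(default_factory=list)
    structural: Dict[str, str] = field(default_factory=dict)    # specification/algorithm text
    functional: Dict[str, list] = field(default_factory=dict)   # data/control flow, def/use graphs
    behavioral: Dict[str, list] = field(default_factory=dict)   # state diagrams, event lists

    def add_child(self, child: "SWU") -> "SWU":
        """Attach a child SWU, establishing the parent-child relationship."""
        child.parent = self
        self.children.append(child)
        return child

    def descend(self):
        """Top-down navigation: yield this SWU and every unit below it."""
        yield self
        for c in self.children:
            yield from c.descend()
```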


Spatio-temporal enhancement of forest fire risk index using weather forecast and satellite data in South Korea (기상 예보 및 위성 자료를 이용한 우리나라 산불위험지수의 시공간적 고도화)

  • KANG, Yoo-Jin;PARK, Su-min;JANG, Eun-na;IM, Jung-ho;KWON, Chun-Geun;LEE, Suk-Jun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.4
    • /
    • pp.116-130
    • /
    • 2019
  • In South Korea, forest fires are increasing in size and duration due to various factors such as the increase in fuel materials and frequent drying conditions in forests. It is therefore necessary to minimize the damage caused by forest fires by appropriately providing the probability of forest fire risk. The purpose of this study is to improve the Daily Weather Index (DWI) provided by the current forest fire forecasting system in South Korea. A new Fire Risk Index (FRI) is proposed, which is provided on a 5 km grid through the synergistic use of numerical weather forecast data, satellite-based drought indices, and forest fire-prone areas. The FRI is calculated as the product of the Fine Fuel Moisture Code (FFMC) optimized for Korea, an integrated drought index, and spatio-temporal weights. To improve the temporal accuracy of the forest fire risk, monthly weights were applied based on the number of forest fire occurrences in each month. Similarly, spatial weights were applied using forest fire density information to improve the spatial accuracy of the forest fire risk. In a time series analysis of the monthly number of forest fires and the FRI, the relationship between the two was well captured. In addition, the FRI on the 5 km grid provided more spatially detailed information on forest fire risk than the DWI based on administrative units. The research findings from this study can help in making appropriate decisions before and after forest fire occurrences.
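
A minimal sketch of the FRI composition described above is given below (hypothetical Python; the grid shapes and the way the monthly and spatial weights are normalized are illustrative assumptions, not the paper's exact formulation).

```python
# Hypothetical sketch: FRI as the product of a Korea-optimized FFMC, an integrated
# drought index, and monthly/spatial weights on a 5 km grid.
import numpy as np

def fire_risk_index(ffmc: np.ndarray,
                    drought: np.ndarray,
                    month: int,
                    monthly_weight: dict,
                    fire_density: np.ndarray) -> np.ndarray:
    """FRI grid = FFMC x integrated drought index x temporal weight x spatial weight."""
    w_t = monthly_weight[month]               # temporal weight from monthly fire occurrence counts
    w_s = fire_density / fire_density.max()   # spatial weight from historical fire density
    return ffmc * drought * w_t * w_s

# usage sketch: ffmc, drought, and fire_density are co-registered 5 km grids;
# monthly_weight maps month (1..12) to a scalar weight.
```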

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.49-55
    • /
    • 2015
  • According to traffic accident statistics over the recent five years, more traffic accidents occurred at night than during the day. Among the various causes of traffic accidents, one of the major ones is inappropriate or missing street lights, which confuse the driver's sight. In this paper, we designed and implemented a smartphone application for lane luminance measurement that stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without any street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves the operation speed compared with code written in Java or other languages. To measure road luminance, the input image in the RGB color space is converted to the YCbCr color space, and the Y value gives the luminance of the road. The application detects the road lanes and stores the calculated lane luminance in the database server. It receives the road video image from the smartphone's camera and reduces the computational cost by allocating an ROI (region of interest) in the input images. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outlines of the lanes. After that, the Hough line transform is applied to obtain the group of candidate lanes. Both sides of the lane are selected by a lane detection algorithm that utilizes the gradients of the candidate lanes. When both lanes are detected, a triangular area is set up extending 20 pixels down from the intersection of the lanes, and the road luminance is estimated from this area. The Y value is calculated from the R, G, and B values of each pixel in the triangle. The average Y value of the pixels is scaled to a range from 0 to 100 to indicate the road luminance, and the pixel values are visualized with colors between black and green. The car location from the smartphone's GPS sensor, together with the luminance of the road about 60 meters ahead obtained by analyzing the lane video image, is sent to the database server via wireless communication every 10 minutes. We expect that the collected road luminance information can warn drivers for safe driving and effectively improve renovation plans for road lighting management.
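
The processing chain described above can be sketched as follows (a hypothetical Python/OpenCV re-creation for illustration; the actual application is native C/C++ via the Android NDK, and the thresholds, ROI crop, and sampling patch used here are assumptions).

```python
# Hypothetical sketch: ROI crop, Canny edges, Hough line candidates, and a BT.601
# luma estimate mapped to the 0-100 road-luminance scale described above.
import cv2
import numpy as np

def road_luminance(frame_bgr: np.ndarray) -> float:
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[h // 2:, :]                       # lower half of the frame as the ROI
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # outlines of lane markings
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    # Lane selection by gradient and the triangle below the lane intersection are
    # omitted in this sketch (`lines` holds the candidate lane segments); a fixed
    # patch near the bottom centre of the ROI stands in for the sampling area.
    patch = roi[-40:-20, w // 3: 2 * w // 3]
    b, g, r = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
    y = 0.299 * r + 0.587 * g + 0.114 * b             # BT.601 luma from the mean RGB values
    return float(y / 255.0 * 100.0)                   # scale to the 0-100 range used above
```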

Implementation of PersonalJava™ AWT using Light-weight Window Manager (경량 윈도우 관리기를 이용한 퍼스널자바 AWT 구현)

  • Kim, Tae-Hyoun;Kim, Kwang-Young;Kim, Hyung-Soo;Sung, Min-Young;Chang, Nae-Hyuck;Shin, Heon-Shik
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.7 no.3
    • /
    • pp.240-247
    • /
    • 2001
  • Java is a promising runtime environment for embedded systems because it has many advantages such as platform independence, high security, and support for multi-threading. One of the best-known Java runtime environments, Sun's PersonalJava$^{TM}$, is based on the Truffle architecture, which enables programmers to design various GUIs easily. For this reason, it has been ported to various embedded systems such as set-top boxes and personal digital assistants (PDAs). Truffle ordinarily relies on heavy-weight window managers such as the Microsoft Win32 API and X Window. However, those window managers are not adequate for embedded systems because they require a large amount of memory and disk space. To meet the requirements of embedded systems, we adopt Microwindows as the platform graphics system for PersonalJava AWT on embedded Linux. Although Microwindows is a light-weight window manager, it provides as powerful an API as traditional window managers. Because Microwindows does not require any support from other graphics systems, it can easily be ported to various platforms. In addition, it is open-source software, so we can easily modify and extend it as needed. In this paper, we implement PersonalJava AWT using Microwindows on embedded Linux and demonstrate the efficiency of our approach.
