• Title/Summary/Keyword: Calculate


Development of Program for Renal Function Study with Quantification Analysis of Nuclear Medicine Image (핵의학 영상의 정량적 분석을 통한 신장기능 평가 프로그램 개발)

  • Song, Ju-Young;Lee, Hyoung-Koo;Suh, Tae-Suk;Choe, Bo-Young;Shinn, Kyung-Sub;Chung, Yong-An;Kim, Sung-Hoon;Chung, Soo-Kyo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.35 no.2
    • /
    • pp.89-99
    • /
    • 2001
  • Purpose: In this study, we developed a new software tool for the analysis of renal scintigraphy that can be modified easily by users who need to study new clinical applications, and the appropriateness of the results from our program was evaluated. Materials and Methods: The analysis tool was programmed in IDL 5.2 and designed for use on a personal computer running Windows. To test the developed tool and assess the appropriateness of the calculated glomerular filtration rate (GFR), $^{99m}Tc$-DTPA was administered to 10 healthy adults. To assess the appropriateness of the calculated mean transit time (MTT), $^{99m}Tc$-DTPA and $^{99m}Tc$-MAG3 were administered to 11 healthy adults and 22 kidneys were analyzed. All images were acquired with ORBITOR, the Siemens gamma camera. Results: With the developed tool, we could display dynamic renal images and the time-activity curve (TAC) of each ROI and calculate clinical parameters of renal function. The results calculated by the developed tool did not differ statistically from those obtained by the Siemens application program (Tmax: p=0.68, relative renal function: p=1.0, GFR: p=0.25), and the developed program proved reasonable. The MTT calculation tool also proved reasonable based on the evaluation of the influence of hydration status on MTT. Conclusion: We obtained reasonable clinical parameters for the evaluation of renal function with the software tool developed in this study, and the tool could prove more practical than conventional commercial programs.
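
For orientation only, the sketch below (not the authors' IDL code) shows how two of the reported parameters, Tmax and relative renal function, can be derived from background-corrected ROI time-activity curves; the frame duration and uptake window are assumed values.

```python
import numpy as np

def renogram_parameters(tac_left, tac_right, frame_sec=10.0,
                        uptake_window=(60.0, 150.0)):
    """Toy renogram analysis of background-corrected time-activity curves.

    tac_left, tac_right : 1-D arrays of counts per frame for each kidney ROI
    frame_sec           : frame duration in seconds (assumed)
    uptake_window       : start/end (s) of the uptake phase used for the
                          relative (split) renal function (assumed)
    """
    t = np.arange(len(tac_left)) * frame_sec

    # Tmax: time at which each renogram curve reaches its peak
    tmax_left, tmax_right = t[np.argmax(tac_left)], t[np.argmax(tac_right)]

    # Relative renal function: ratio of integrated uptake-phase counts
    sel = (t >= uptake_window[0]) & (t <= uptake_window[1])
    uptake_left = np.trapz(tac_left[sel], t[sel])
    uptake_right = np.trapz(tac_right[sel], t[sel])
    rrf_left = 100.0 * uptake_left / (uptake_left + uptake_right)

    return {"Tmax_left_s": tmax_left, "Tmax_right_s": tmax_right,
            "RRF_left_%": rrf_left, "RRF_right_%": 100.0 - rrf_left}
```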


Safety Assessment of Estimated Daily Intakes of Antioxidants in Korean Using Dietary Survey Approach and Food Supply Survey Approach (식이를 통한 평가방법과 공급량 평가방법을 이용한 산화방지제 일일 추정 섭취량 안전성 평가)

  • Suh, Hee-Jae;Choi, Sung-Hee
    • Korean Journal of Food Science and Technology
    • /
    • v.42 no.6
    • /
    • pp.762-767
    • /
    • 2010
  • This study evaluated the daily intakes of BHT, BHA, and TBHQ in Koreans. The daily intakes were estimated using both a dietary survey approach and a food supply survey approach. In the dietary survey approach, individual dietary intake data from the 2005 Korea National Health and Nutrition Survey, together with analytical results for BHT in 131 samples, BHA in 134 samples, and TBHQ in 104 samples, were used to assess daily intakes of the antioxidants. In the food supply survey approach, the total production amounts of BHT, BHA, and TBHQ and the maximum permitted levels of the antioxidants were used to calculate daily intakes. In the dietary survey results, the average daily intakes of BHT, BHA, and TBHQ were 0.8, 0.5, and 0.3 ${\mu}g$/kg body weight/day, respectively, below 0.2% of the ADI (acceptable daily intake) set by JECFA (Joint FAO/WHO Expert Committee on Food Additives). In the food supply survey approach, the average daily intakes of BHT, BHA, and TBHQ were all 0.3 mg/kg body weight/day, corresponding to 97, 60, and 40% of the respective ADIs. According to these results, the daily intakes of BHT, BHA, and TBHQ in Koreans are lower than the ADIs.
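
A minimal sketch of the dietary-survey calculation (estimated daily intake = sum of food consumption times measured concentration, divided by body weight, then compared with the ADI); the food items, concentrations, body weight, and ADI value below are placeholders, not the study's data.

```python
# Hypothetical per-food records: daily consumption (g/person/day) and
# measured antioxidant concentration (mg/kg food).
foods = [
    {"name": "food_A", "consumption_g": 25.0, "conc_mg_per_kg": 1.2},
    {"name": "food_B", "consumption_g": 60.0, "conc_mg_per_kg": 0.4},
]

BODY_WEIGHT_KG = 55.0   # assumed mean body weight
ADI_UG_PER_KG = 300.0   # e.g. an ADI of 0.3 mg/kg bw/day, expressed in ug

# Dietary-survey approach: EDI = sum(consumption x concentration) / body weight
edi_ug_per_kg = sum(f["consumption_g"] / 1000.0 * f["conc_mg_per_kg"] * 1000.0
                    for f in foods) / BODY_WEIGHT_KG

print(f"EDI = {edi_ug_per_kg:.2f} ug/kg bw/day "
      f"({100.0 * edi_ug_per_kg / ADI_UG_PER_KG:.2f}% of ADI)")
```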

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually provided with a specific category for the convenience of the users. In the past, the categorization was performed manually. However, in the case of manual categorization, not only is the accuracy of the categorization not guaranteed, but the process also requires a large amount of time and cost. Many studies have been conducted towards the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they assume that one document can be categorized into one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set. These methods therefore cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the result of topic analysis for single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate the matching scores of each document to multiple categories. A document is classified into a certain category if and only if its matching score is higher than the predefined threshold. For example, we can classify a certain document into three categories whose matching scores exceed the predefined threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and they contain less vulgar language and slang than other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies greatly across categories, both because readers have different levels of interest in each category and because events occur with different frequencies in each category. In order to minimize the distortion caused by the differing numbers of articles per category, we extracted 3,000 articles equally from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
Using the collected news articles, we calculated the document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of the 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, whereas they were 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, precision, recall, and F-score varied considerably across the eight categories.
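
The abstract does not spell out the combination rule, so the sketch below simply assumes a matrix product of the document/topic and topic/category correspondence tables; the matrices and the threshold are illustrative values only.

```python
import numpy as np

# Illustrative dimensions: 2 documents, 3 topics, 4 categories.
doc_topic = np.array([[0.6, 0.3, 0.1],       # document/topic scores (e.g. from topic analysis)
                      [0.1, 0.2, 0.7]])
topic_cat = np.array([[0.8, 0.2, 0.0, 0.0],  # topic/category correspondence table
                      [0.1, 0.7, 0.2, 0.0],
                      [0.0, 0.1, 0.4, 0.5]])

# Document/category matching score: combine the two correspondence tables.
doc_cat = doc_topic @ topic_cat

THRESHOLD = 0.3   # assumed cut-off; the paper's actual threshold is not given here
for d, scores in enumerate(doc_cat):
    assigned = [c for c, s in enumerate(scores) if s >= THRESHOLD]
    print(f"document {d}: category scores {np.round(scores, 2)}, "
          f"assigned categories {assigned}")
```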

The Distribution and Geomorphic Changes of Natural Lakes in East Coast Korea (한반도 동해안의 자연호 분포와 지형 환경 변화)

  • Lee, Min-Boo;Kim, Nam-Shin;Lee, Gwang-Ryul
    • Journal of the Korean association of regional geographers
    • /
    • v.12 no.4
    • /
    • pp.449-460
    • /
    • 2006
  • This study aims to analyze the distribution of natural lakes, including lagoonal lakes (lagoons) and tributary dammed lakes (tributary lakes), and to calculate their size and morphology in order to interpret time-serial changes of the lakes along the east coast of the Korean Peninsula, using remote sensing images (1990s), GIS, and topographic maps (1920s). The analysis shows that in the 1990s there were 57 natural lakes larger than $0.01 km^2$, with a total area of $75.62 km^2$. Marine-origin lagoons number 48 with a total area of $64.85 km^2$, making up 85% of the total, and the largest lagoon is Beonpo in Raseon City. Tributary lakes have been formed by the damming of tributary channels by fluvial sand bars from the main stream, and they are located near the coastal zone, similar to the lagoon sites. A large tributary lake, Jangyeonho, developed in a dissected valley of the lava plateau in Eorang Gun, Hamnam Province. Tributary lakes are mainly distributed from the Duman River mouth to Cheongjin City, from Heungnam City to the Hodo Peninsula, and from Anbyeon Gun to Gangreung City. Geomorphometrically, the correlation of lake size with circumference is very high, but the correlation of size with shape irregularity is very low. The lagoonal coasts are oriented predominantly NW-SE and NE-SW, reflecting the tectonic structure and longshore currents. The rivers flowing into the lakes are generally short (under 15 km at most), and the smaller the lake, the higher the degree of size decrease. Geomorphic patterns of lake location are classified as coast-hill range, coastal plain, coastal plain-channel valley, coastal plain-hill range, and channel valley-hill range. From the 1920s to the 1990s, the decrease in lake size was greatest for the coastal plain-channel valley type, followed by the coastal plain type. The causes of the size decrease are fluvial deposition from upstream rivers and human impacts such as reclamation.


Scaling up of single fracture using a spectral analysis and computation of its permeability coefficient (스펙트럼 분석을 응용한 단일 균열 규모확장과 투수계수 산정)

  • 채병곤
    • The Journal of Engineering Geology
    • /
    • v.14 no.1
    • /
    • pp.29-46
    • /
    • 2004
  • Identifying the geometries of fractures that act as conduits for fluid flow is important for characterizing groundwater flow in fractured rock, because fracture geometries control hydraulic conductivity and streamlines in a rock mass. However, it is difficult to acquire complete geometric data on fractures at the field scale because outcrops are distributed discontinuously and subsurface data cannot be collected continuously. A method is therefore needed to describe the overall geometry of a target fracture. This study suggests a new approach, based on the Fourier transform, to characterize the overall geometry of a target fracture. After sampling specimens along a target fracture from borehole cores, the effective frequencies among the roughness components were selected by applying the Fourier transform to each specimen. The selected effective frequencies were then averaged at each frequency. Because the averaged spectrum includes the frequency profiles of every specimen, it represents the characteristic components of the roughness of the target fracture. The inverse Fourier transform was applied after low-pass filtering to reconstruct an averaged overall roughness feature. The reconstructed roughness feature represents the roughness of the target subsurface fracture while retaining the geometrical characteristics of each specimen; in other words, it is the overall roughness feature obtained by scaling up the fracture. To identify the permeability characteristics along the target fracture, fracture models were constructed based on the reconstructed roughness feature. The permeability coefficient was computed by homogenization analysis, which can calculate accurate permeability coefficients with full consideration of fracture geometry. The results range between $10^{-4}$ and $10^{-3}$ cm/sec, which are reasonable values of the permeability coefficient along a large fracture. This approach can be effectively applied to the analysis of permeability characteristics along a large fracture as well as to identification of the overall geometry of a fracture at the field scale.
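
A minimal numpy sketch of the spectrum-averaging and inverse-transform step described above; the synthetic profiles and the low-pass cut-off fraction are assumptions, and the homogenization analysis for the permeability coefficient is not reproduced here.

```python
import numpy as np

def representative_roughness(profiles, keep_fraction=0.1):
    """Average the Fourier spectra of several roughness profiles and
    reconstruct one representative profile by low-pass filtering.

    profiles      : 2-D array, each row a roughness profile of equal length
    keep_fraction : fraction of the lowest frequencies retained (assumed cut-off)
    """
    spectra = np.fft.rfft(profiles, axis=1)   # spectrum of each specimen
    mean_spectrum = spectra.mean(axis=0)      # frequency-wise average

    # Low-pass filter: keep only the lowest frequencies.
    cutoff = int(keep_fraction * mean_spectrum.size)
    filtered = np.zeros_like(mean_spectrum)
    filtered[:cutoff] = mean_spectrum[:cutoff]

    # Inverse transform gives the averaged, smoothed roughness feature.
    return np.fft.irfft(filtered, n=profiles.shape[1])

# Example with synthetic roughness profiles
rng = np.random.default_rng(0)
profiles = np.cumsum(rng.normal(size=(5, 256)), axis=1)   # 5 rough profiles
rep = representative_roughness(profiles)
```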

Calculation of Expected Sliding Distance of Concrete Caisson of Vertical Breakwater Considering Variability in Wave Direction (파향의 변동성을 고려한 직립방파제 콘크리트 케이슨의 기대활동량 산정)

  • 홍수영;서경덕;권혁민
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.16 no.1
    • /
    • pp.27-38
    • /
    • 2004
  • In this study, the reliability design method developed by Shimosako and Takahashi in 1999 for calculating the expected sliding distance of the caisson of a vertical breakwater is extended to take into account the variability in wave direction, namely the directional spreading of waves, the obliquity of the deep-water design principal wave direction from the shore-normal direction, and its variation about the design value. To calculate the transformation of random directional waves, the model developed by Kweon et al. in 1997 is used instead of Goda's model, which was developed in 1975 for unidirectional random waves normally incident to a straight coast with parallel depth contours and has been used by Shimosako and Takahashi. The effects of directional spreading and of the variation of the deep-water principal wave direction were minor compared with that of the obliquity of the deep-water design principal wave direction from the shore-normal direction, which tends to reduce the expected sliding distance as it increases. In particular, when field data from a part of the east coast of Korea were used, considering the variability in wave direction reduced the expected sliding distance to about one third of that obtained without the directional variability. Reducing the significant wave height calculated at the design site by 6%, to correct for the wave refraction neglected when Goda's model is used, was found to be appropriate when the deep-water design principal wave direction is about 20 degrees; when it is smaller than 20 degrees, a value smaller than 6% should be used, and vice versa. When the caisson was designed for an expected sliding distance of 30 cm, in water depths of 25 m or less the caisson width could be reduced by up to about 30% compared with the deterministic design, even without considering the variability in wave direction. When field data from a part of the east coast of Korea were used, considering the variability in wave direction reduced the necessary caisson width by up to about 10% compared with that obtained without the directional variability, and a caisson width smaller than that of the deterministic design was needed over the whole range of water depths considered (10∼30 m).
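
The abstract builds on the Monte Carlo framework of Shimosako and Takahashi, in which many storm sequences over the structure's lifetime are simulated and the sliding accumulated in each lifetime is averaged. The sketch below shows only that expectation structure; the wave-height and direction distributions and the per-storm sliding model are crude placeholders, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def sliding_distance_per_storm(height_m, direction_deg):
    """Placeholder sliding model: returns a sliding increment (cm) for one storm.
    A real application would use the Goda wave-force model and the caisson's
    equation of motion, as in Shimosako and Takahashi (1999)."""
    # Oblique waves transmit less shoreward force; the cosine is a crude stand-in.
    effective = height_m * max(np.cos(np.radians(direction_deg)), 0.0)
    return max(0.0, (effective - 6.0) * 5.0)   # slides only above a threshold

N_LIFETIMES = 2000        # number of simulated service lifetimes
STORMS_PER_LIFETIME = 50  # e.g. one design storm per year, illustrative only

totals = []
for _ in range(N_LIFETIMES):
    heights = rng.gumbel(loc=5.0, scale=1.0, size=STORMS_PER_LIFETIME)        # extreme wave heights
    directions = rng.normal(loc=20.0, scale=10.0, size=STORMS_PER_LIFETIME)   # obliquity (deg)
    totals.append(sum(sliding_distance_per_storm(h, d)
                      for h, d in zip(heights, directions)))

print(f"expected sliding distance ~ {np.mean(totals):.1f} cm")
```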

Using Effective Temperatures to Determine Safety Cultivation Season in Direct Seeding Rice on Dry Paddy (작물생육 유효기온 출현시기를 이용한 건답직파 벼의 지역별 안전작기 설정)

  • 최돈향;윤경민;윤성호;박무언
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.42 no.6
    • /
    • pp.666-672
    • /
    • 1997
  • Twenty years of daily mean air temperature data were used to calculate the critical early seeding date (CESD), the optimum heading date (OHD), the critical late heading date for stable ripening (CHDR), and the critical late ripening date (CLRD) for rice seeded on dry paddy in different agroclimatic zones in Korea. The CESD was defined as the first day with a mean air temperature of $13^{\circ}C$, and the OHD as the first day of the 40 consecutive days after heading with a mean air temperature of $22^{\circ}C$ or above. The CHDR was defined as the date after which the cumulative daily mean air temperature would be at least $760^{\circ}C$. Lastly, the CLRD was defined as the last day on which the daily mean air temperature remains above $15^{\circ}C$. This information was used to estimate, for each region, the periods from the earliest seeding date to the optimum heading date, to the latest possible heading date, and to the latest possible ripening date. For instance, in Suwon, these periods were found to be 104 days, 124 days, and 165 days, respectively.
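
A sketch of how the four critical dates defined above might be scanned for, assuming a pandas Series of daily mean temperatures with a daily DatetimeIndex; the thresholds come from the abstract, but the end point of the 760 degree-day accumulation (taken here as the CLRD) and the scanning logic are assumptions.

```python
import pandas as pd

def critical_dates(daily: pd.Series):
    """daily: mean air temperature (deg C) indexed by consecutive calendar days."""
    # CESD: first day with mean temperature of at least 13 C
    cesd = daily[daily >= 13.0].index[0]

    # OHD: first day of 40 consecutive days with mean temperature >= 22 C
    runs = (daily >= 22.0).astype(int).rolling(40).sum()
    ohd = runs[runs == 40].index[0] - pd.Timedelta(days=39)

    # CLRD: last day with mean temperature above 15 C
    clrd = daily[daily > 15.0].index[-1]

    # CHDR: latest date from which the cumulative mean temperature up to the
    # CLRD still reaches 760 C-days (end point assumed to be the CLRD)
    backward_sum = daily.loc[:clrd][::-1].cumsum()[::-1]
    chdr = backward_sum[backward_sum >= 760.0].index[-1]

    return cesd, ohd, chdr, clrd
```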


Change in Yield and Quality Characteristics of Rice by Drought Treatment Time during the Seedling Stage (벼 이앙 직후 유묘기 한발 피해시기에 따른 수량 및 미질 특성 변화)

  • Jo, Sumin;Cho, Jun-Hyeon;Lee, Ji-Yoon;Kwon, Young-Ho;Kang, Ju-Won;Lee, Sais-Beul;Kim, Tae-Heon;Lee, Jong-Hee;Park, Dong-Soo;Lee, Jeom-Sig;Ko, Jong-Min
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.64 no.4
    • /
    • pp.344-352
    • /
    • 2019
  • Drought stress caused by global climate change is a serious problem for rice cultivation. Increasingly frequent abnormal weather could include severe drought, which could cause water stress to rice during the seedling stage. This experiment was conducted to clarify the effects of drought during the seedling period on the yield and quality of rice. Drought conditions were created in a rain shelter house facility, and the drought treatment was applied at 3, 10, and 20 days after transplanting. Soil water content was measured by a soil moisture sensor throughout the growth period. We chose three rice cultivars that are widely cultivated in Korea: 'Haedamssal' (early maturing), 'Samkwang' (medium maturing), and 'Saenuri' (mid-late maturing). The decrease in yield due to drought treatment was most severe when the treatment was applied 3 days after transplanting, because of the decrease in the number of effective tillers. The decrease in grain quality was likewise most severe for treatment 3 days after transplanting, because of the increased protein content and hardness of the grains. The cultivar 'Haedamssal' was the most severely damaged by water stress, with a yield loss of about 30%. Drought conditions shortened the early vigorous growth period and the days to heading in early-maturing cultivars. The results show that drought stress immediately after transplanting affects yield components and is a decisive factor in reducing yield and grain quality. This study can serve as basic data for calculating compensation for drought damage on actual rice farms.

Computing the Dosage and Analysing the Effect of Optimal Rechlorination for Adequate Residual Chlorine in Water Distribution System (배.급수관망의 잔류염소 확보를 위한 적정 재염소 주입량 산정 및 효과분석)

  • Kim, Do-Hwan;Lee, Doo-Jin;Kim, Kyoung-Pil;Bae, Chul-Ho;Joo, Hye-Eun
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.10
    • /
    • pp.916-927
    • /
    • 2010
  • In the general water treatment process, disinfection by chlorine is used to prevent waterborne disease and microbial regrowth in the water distribution system. Because chlorine reacts with organic matter, carcinogenic disinfection by-products (DBPs) are produced in drinking water; therefore, a suitable chlorine injection is needed to decrease DBPs. Rechlorination in water pipelines or reservoirs has recently been increasing in order to secure residual chlorine at the ends of water pipelines. EPANET 2.0, developed by the U.S. Environmental Protection Agency (EPA), was used to compute the optimal chlorine injection at the water treatment plant and to predict the rechlorination dosage in the water distribution system. The bulk decay constant ($k_{bulk}$) was obtained from bottle tests, and the wall decay constant ($k_{wall}$) was derived using a systematic analysis method for water quality modeling in the target region. To predict water quality based on the hydraulic analysis model, the residual chlorine concentration in the water distribution system was forecast. The formation of DBPs such as trihalomethanes (THMs) was verified against the chlorine dosage in a lab-scale test. The bulk decay constant ($k_{bulk}$) decreased rapidly in the early period, more so at higher temperatures; at 25 degrees Celsius, $k_{bulk}$ had decreased by more than half after 25 hours. In this study, we were able to calculate the optimal rechlorination dosage and to select suitable rechlorination sites on the network map.
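
Bulk chlorine decay is commonly represented, including in EPANET modeling, by a first-order model C(t) = C0 exp(-k_bulk t). The sketch below fits such a constant to hypothetical bottle-test data; the measurements and the fitted value are illustrative, not the study's results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical bottle-test data: residual chlorine (mg/L) measured over time (h).
t_hours = np.array([0, 2, 5, 10, 25, 50], dtype=float)
cl_mg_L = np.array([1.00, 0.90, 0.78, 0.62, 0.35, 0.15])

# First-order bulk decay model: C(t) = C0 * exp(-k_bulk * t)
def first_order(t, c0, k_bulk):
    return c0 * np.exp(-k_bulk * t)

(c0_fit, k_bulk_fit), _ = curve_fit(first_order, t_hours, cl_mg_L, p0=(1.0, 0.05))
print(f"fitted k_bulk = {k_bulk_fit:.3f} 1/h, C0 = {c0_fit:.2f} mg/L")

# Residual chlorine expected after 25 h under the fitted decay constant
print(f"C(25 h) = {first_order(25.0, c0_fit, k_bulk_fit):.2f} mg/L")
```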

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.19-28
    • /
    • 2012
  • After the Internet era, we are moving toward the ubiquitous society. Nowadays people are interested in multimodal interaction technology, which enables the audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between subjects and audience by analyzing people's usage patterns. In order to provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used nowadays. GPS can obtain the real-time location of fast-moving subjects, so it is one of the important technologies in fields requiring location tracking services. However, because GPS tracks location using satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking are being conducted using very short range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. However, these technologies have shortcomings: the audience needs to carry an additional sensor device, and the system becomes difficult and expensive as the density of the target area increases. In addition, the usual exhibition environment has many obstacles for the network, which degrades system performance. Above all, the biggest problem is that interaction methods using devices based on these older technologies cannot provide natural service to the users. Moreover, because such systems use a sensor-recognition method, every user must be equipped with a device, which limits the number of users who can use the system simultaneously. To make up for these shortcomings, in this study we suggest a technology that obtains exact user location information through a location mapping technique using the Wi-Fi of smartphones and 3D cameras. We used the signal strength of wireless LAN access points to develop an indoor location tracking system at a lower price. An AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, that device can be used directly as part of the tracking system. We used the Microsoft Kinect sensor as the 3D camera. Kinect provides depth information and detects people within the shooting area, so it is appropriate for extracting users' body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we avoid additional tagging devices and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users. The 3D cameras are connected to the Camera Client, which calculates the mapping information aligned to each cell and obtains the exact location, status, and pattern information of the audience.
The location mapping technique of the Camera Client decreases the error rate of the indoor location service, increases the accuracy of individual discrimination in the area by using body information, and establishes the foundation of multimodal interaction technology at the exhibition. The calculated data and information enable users to receive the appropriate interaction service through the main server.
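
As a rough illustration of the coarse Wi-Fi step described above, the sketch below assigns a visitor to the cell served by the access point with the strongest received signal; the AP names, cell mapping, and RSSI values are made up, and the finer within-cell positioning would come from the Kinect-based Camera Client.

```python
# Map of access points to the exhibition cells they serve (hypothetical names).
AP_CELLS = {
    "ap_hall_A": "cell_A",
    "ap_hall_B": "cell_B",
    "ap_lobby":  "cell_lobby",
}

def estimate_cell(rssi_scan: dict[str, float]) -> str:
    """rssi_scan maps AP identifiers to measured signal strength in dBm
    (values closer to 0 are stronger); returns the coarse cell ID."""
    strongest_ap = max(rssi_scan, key=rssi_scan.get)
    return AP_CELLS.get(strongest_ap, "unknown")

# Example scan reported by a visitor's smartphone
scan = {"ap_hall_A": -71.0, "ap_hall_B": -55.0, "ap_lobby": -80.0}
print(estimate_cell(scan))   # -> cell_B
```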