• Title/Summary/Keyword: Analysis algorithm

Improvement of 2-pass DInSAR-based DEM Generation Method from TanDEM-X bistatic SAR Images (TanDEM-X bistatic SAR 영상의 2-pass 위성영상레이더 차분간섭기법 기반 수치표고모델 생성 방법 개선)

  • Chae, Sung-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.847-860
    • /
    • 2020
  • The 2-pass DInSAR (Differential Interferometric SAR) processing steps for DEM generation consist of co-registration of the SAR image pair, interferogram generation, phase unwrapping, calculation of DEM errors, and geocoding. These steps are complicated, and the accuracy of the data processing at each step affects the quality of the final DEM. In this study, we developed an improved method for enhancing the performance of DEM generation based on the 2-pass DInSAR technique applied to TanDEM-X bistatic SAR images. The developed method significantly reduces both the DEM error in the unwrapped phase image and the error that may occur during the geocoding step. The performance of the developed algorithm was analyzed by comparing the vertical accuracy (root mean square error, RMSE) of the existing method and the newly proposed method against ground control points (GCPs) generated from a GPS survey. The vertical accuracy of the DInSAR-based DEM generated without correction for the unwrapped-phase error and the geocoding error is 39.617 m, whereas the vertical accuracy of the DEM generated with the proposed method is 2.346 m, confirming that the proposed correction improves DEM accuracy. Through the proposed 2-pass DInSAR-based DEM generation method, the SRTM DEM error observed by DInSAR was compensated relative to the SRTM 30 m DEM (vertical accuracy 5.567 m) used as a reference, finally yielding a DEM with about 5 times better spatial resolution and about 2.4 times better vertical accuracy. In addition, the DEM generated by the proposed method was matched in spatial resolution with the SRTM 30 m DEM and the TanDEM-X 90 m DEM and the vertical accuracies were compared; the results confirmed improvements of about 1.7 and 1.6 times, respectively, showing that more accurate DEM generation is possible with the proposed method. If the method derived in this study is used to continuously update DEMs of regions with frequent morphological change, DEMs can be updated effectively, quickly, and at low cost.
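
A minimal computational sketch of two quantities at the heart of the evaluation above: the vertical RMSE against GPS-surveyed GCPs, and the standard 2-pass DInSAR conversion from residual unwrapped phase to DEM height error (one 2π fringe corresponds to one height of ambiguity). This is an illustration under those standard relations, not the authors' code; all array and parameter names are hypothetical.

```python
import numpy as np

def vertical_rmse(dem_heights, gcp_heights):
    """Vertical accuracy (RMSE) of DEM heights sampled at GCP locations
    against the GPS-surveyed GCP heights."""
    diff = np.asarray(dem_heights) - np.asarray(gcp_heights)
    return float(np.sqrt(np.mean(diff ** 2)))

def phase_to_height_error(unwrapped_phase, height_of_ambiguity):
    """Convert residual 2-pass DInSAR phase to a DEM height error:
    dh = h_amb * phi / (2*pi), since one fringe equals one height of
    ambiguity. `height_of_ambiguity` depends on the TanDEM-X baseline."""
    return height_of_ambiguity * np.asarray(unwrapped_phase) / (2.0 * np.pi)
```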

Clinical Analysis of Disease Recurrence for the Patients with Secondary Spontaneous Pneumothorax (이차성 자연기흉 환자의 재발양상에 관한 분석)

  • Ryu, Kyoung-Min;Kim, Sam-Hyun;Seo, Pil-Won;Park, Seong-Sik;Ryu, Jae-Wook;Kim, Hyun-Jung
    • Journal of Chest Surgery
    • /
    • v.41 no.5
    • /
    • pp.619-624
    • /
    • 2008
  • Background: Secondary spontaneous pneumothorax is caused by various underlying lung diseases, whereas primary spontaneous pneumothorax is caused by rupture of subpleural blebs, so the treatment algorithm for secondary pneumothorax differs from that for primary pneumothorax. We studied the recurrence rate, the characteristics of recurrence, and the treatment outcomes of patients with secondary spontaneous pneumothorax. Material and Method: Between March 2005 and March 2007, 85 patients were treated for their first episode of secondary spontaneous pneumothorax. We analyzed the characteristics of and factors for recurrence by a retrospective review of the medical records. Result: The most common underlying lung disease was pulmonary tuberculosis (49.4%), followed by chronic obstructive lung disease (27.6%). The recurrence rate was 47.1% (40/85); the second and third recurrence rates were 10.9% and 3.5%, respectively. The mean follow-up period was 21.1±6.7 months (range: 0~36 months). Of the recurrences, 70.5% occurred within a year after the first episode. The success rates according to treatment modality were thoracostomy 47.6%, chemical pleurodesis 74.4%, bleb resection 71%, and Heimlich valve application 50%; chemical pleurodesis through the chest tube was the most effective treatment. The factor most predictive of recurrence was an air leak of 7 days or more at the first episode (p=0.002). Conclusion: Patients with a prolonged air leak at the first episode of pneumothorax tend to have a higher incidence of recurrence. Further studies with more patients are necessary to determine a standard treatment protocol for secondary spontaneous pneumothorax.

A Study on the Design of Case-based Reasoning Office Knowledge Recommender System for Office Professionals (사례기반추론을 이용한 사무지식 추천시스템)

  • Kim, Myong-Ok;Na, Jung-Ah
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.131-146
    • /
    • 2011
  • It is becoming more essential than ever for office professionals to become competent in information collection and problem solving in today's global business society. In particular, office professionals do not merely assist with simple chores but are also forced to make decisions as quickly and efficiently as possible in problematic situations that can end in either profit or loss for their company. Since office professionals rely heavily on their tacit knowledge to solve problems that arise in everyday business situations, it is truly helpful and efficient to refer to similar business cases from the past and to share and reuse such previous business knowledge for better performance. Case-based reasoning (CBR) is a problem-solving method which utilizes previous similar cases to solve problems. Through CBR, the case closest to the current business situation can be searched and retrieved from the case or knowledge base and referred to for a new solution, which reduces the time and resources needed and increases the probability of success. The main purpose of this study is to design a system called COKRS (Case-based reasoning Office Knowledge Recommender System) and develop a prototype of it. COKRS manages cases and their metadata, accepts keywords from the user, searches the casebase for the past case most similar to the input keywords, and communicates with users to collect information about the quality of the case provided, continuously applying that information to update values in the similarity table. Core concepts such as the system architecture, the definition of a case, the meta database, and the similarity table are introduced, and an algorithm to retrieve all similar cases from past work history is also proposed. In this research, a case is best defined as a work experience in office administration. However, defining a case in office administration was not an easy task in reality. We surveyed 10 office professionals in order to get an idea of how to define a case in office administration and found that in most situations any type of office work is recorded digitally and/or non-digitally; therefore, we defined a case for COKRS as a record or document. The similarity table was composed of items from a job analysis of office professionals conducted in previous research, and values between items of the similarity table were initially set from the researchers' experience and a literature review. The results of this study could also be utilized in other areas of business for knowledge sharing wherever it is necessary and beneficial to share and learn from past experiences. We expect this research to be a reference for researchers and developers who are in this area or interested in office knowledge recommendation systems based on CBR. A focus group interview (FGI) was conducted with ten administrative assistants carefully selected from various areas of business. They were given a chance to try out COKRS in an actual work setting and to make suggestions for future improvement. The FGI identified the user interface for saving and searching cases by keyword as the most positive aspect of COKRS, and identified transforming tacit knowledge and know-how into recorded documents more efficiently as the most urgently needed improvement. The focus group also mentioned that it is essential to secure enough support, encouragement, and reward from the company and to promote a positive attitude and atmosphere toward knowledge sharing for everybody's benefit in the company.
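
A minimal sketch of the CBR retrieve step that COKRS performs: score each past case against the user's keywords through a pairwise similarity table and return the closest matches. The `Case` structure and the table values here are hypothetical simplifications, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A record/document case in office administration, as COKRS defines a case."""
    keywords: set
    solution: str

def similarity(query, case, table):
    """Average best-match score of each query keyword against the case.

    An exact keyword match scores 1.0; otherwise the similarity table
    supplies a value in [0, 1] for related keyword pairs (0.0 if unknown).
    """
    score = 0.0
    for q in query:
        candidates = [1.0 if q in case.keywords else 0.0]
        candidates += [table.get((q, k), 0.0) for k in case.keywords]
        score += max(candidates)
    return score / max(len(query), 1)

def retrieve(query, casebase, table, top_n=3):
    """The CBR 'retrieve' step: return the top-n most similar past cases."""
    return sorted(casebase, key=lambda c: similarity(query, c, table),
                  reverse=True)[:top_n]

# Usage with a toy casebase and similarity table (all values invented).
table = {("itinerary", "travel"): 0.8, ("minutes", "meeting"): 0.9}
casebase = [Case({"travel", "booking"}, "Use the agency checklist"),
            Case({"meeting", "agenda"}, "Reuse last quarter's agenda template")]
print(retrieve({"itinerary"}, casebase, table, top_n=1)[0].solution)
```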

Patients Setup Verification Tool for RT (PSVTS): DRR, Simulation, Portal and Digital Images (방사선치료 시 환자자세 검증을 위한 분석용 도구 개발)

  • Lee Suk;Seong Jinsil;Kwon Soo Il;Chu Sung Sil;Lee Chang Geol;Suh Chang Ok
    • Radiation Oncology Journal
    • /
    • v.21 no.1
    • /
    • pp.100-106
    • /
    • 2003
  • Purpose: To develop a patients' setup verification tool (PSVT) to verify the alignment of the machine and target isocenters and the reproducibility of the patients' setup for three-dimensional conformal radiotherapy (3DCRT) and intensity modulated radiotherapy (IMRT), and to evaluate the utilization of this system through phantom and patient case studies. Materials and methods: We developed and clinically tested a new method for patients' setup verification using digitally reconstructed radiography (DRR), simulation, portal, and digital images. The PSVT system was networked to a Pentium PC so that the acquired images could be transmitted to the PC for analysis. To verify the alignment of the machine and target isocenters, orthogonal pairs of simulation images were used as verification images. Errors in the isocenter alignment were measured by comparing the verification images with the DRRs of the CT images. Orthogonal films were taken of all patients once a week; these verification films were compared with the DRRs used for the treatment setup. By performing this procedure at every treatment, using a humanoid phantom and patient cases, the localization errors could be analyzed and adjustments made from the measured translations. The reproducibility of the patients' setup was verified using portal and digital images. Results: The PSVT system was developed to verify the alignment of the machine and target isocenters and the reproducibility of the patients' setup for 3DCRT and IMRT. The results show that the localization errors are 0.8±0.2 mm (AP) and 1.0±0.3 mm (lateral) in the brain cases and 1.1±0.5 mm (AP) and 1.0±0.6 mm (lateral) in the pelvis cases. The reproducibility of the patients' setup was verified by visualization using real-time image acquisition, leading to the practical utilization of our software. Conclusions: A PSVT system was developed for verification of the alignment between the machine and target isocenters and the reproducibility of the patients' setup in 3DCRT and IMRT. With adjustment of the completed GUI-based algorithm and a good-quality DRR image, our software may be used for clinical applications.
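
The abstract does not state how the verification images and DRRs were compared numerically, so the following is only a plausible sketch of measuring a translational setup error with FFT phase correlation; the AP/lateral axis assignment and the `pixel_mm` scaling are assumptions, not the PSVT implementation.

```python
import numpy as np

def translation_error_mm(drr, verification, pixel_mm):
    """Estimate the translational shift between a DRR and a verification image.

    Phase correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum gives the pixel shift, scaled to mm by pixel size.
    """
    cross = np.fft.fft2(drr) * np.conj(np.fft.fft2(verification))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Shifts past half the image size wrap around to negative offsets.
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return shifts[0] * pixel_mm, shifts[1] * pixel_mm  # assumed (AP, lateral)
```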

Seasonal Variation of Thermal Effluents Dispersion from Kori Nuclear Power Plant Derived from Satellite Data (위성영상을 이용한 고리원자력발전소 온배수 확산의 계절변동)

  • Ahn, Ji-Suk;Kim, Sang-Woo;Park, Myung-Hee;Hwang, Jae-Dong;Lim, Jin-Wook
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.4
    • /
    • pp.52-68
    • /
    • 2014
  • In this study, we investigated the seasonal variation of sea surface temperature (SST) and thermal effluents around the Kori Nuclear Power Plant, estimated using Landsat-7 ETM+ over 10 years (2000~2010). We also analyzed the direction and range of thermal-effluent dispersion under tidal current and tide. The results are as follows. First, we derived an algorithm to estimate SST through linear regression of Landsat DN (digital number) against NOAA SST; the estimated SST was then verified by comparison with in situ measurements and NOAA SST, giving a determination coefficient of 0.97 and a root mean square error of 1.05~1.24°C. Second, the Landsat-7 SST distribution estimated by the linear regression equation was 12~13°C in winter, 13~19°C in spring, and 24~29°C in summer and 16~24°C in fall. The difference between the ambient SST and the thermal-effluent temperature was 6~8°C except in summer; in August the difference was at most 2°C, and there was hardly any dispersion of thermal effluents. As for the spread of the thermal effluents, the area with a sea-surface temperature rise of more than 1°C extended up to 7.56 km east-west and 8.43 km north-south, with a maximum spread area of 11.65 km². The findings of this study are expected to serve as foundational data for marine environment monitoring around the nuclear power plant.
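
A minimal sketch of the regression step described above: fit SST = a·DN + b against matched NOAA SST, then check the determination coefficient and RMSE. The arrays below are made-up placeholders, not the study's data (the paper reports R² = 0.97 and RMSE of 1.05~1.24°C).

```python
import numpy as np

# Placeholder samples: Landsat-7 ETM+ thermal-band digital numbers and
# collocated NOAA SST values (degrees C). Not real data.
dn = np.array([120.0, 125.0, 131.0, 138.0, 144.0, 150.0])
noaa_sst = np.array([12.1, 13.4, 15.0, 17.2, 19.1, 21.0])

a, b = np.polyfit(dn, noaa_sst, deg=1)   # linear regression SST = a*DN + b
est = a * dn + b

ss_res = np.sum((noaa_sst - est) ** 2)
ss_tot = np.sum((noaa_sst - noaa_sst.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot               # determination coefficient
rmse = np.sqrt(np.mean((noaa_sst - est) ** 2))
print(f"SST = {a:.3f}*DN + {b:.3f}, R2 = {r2:.3f}, RMSE = {rmse:.2f} C")
```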

Comparison of Clinical Tissue Dose Distributions with Those Derived from the Energy Spectrum of a 15 MV Linear Accelerator, Determined Using the Transmitted Dose through a Lead Filter (연(鉛)필터의 투과선량을 이용한 15 MV X선의 에너지스펙트럼 결정과 조직선량 비교)

  • Choi, Tae-Jin;Kim, Jin-Hee;Kim, Ok-Bae
    • Progress in Medical Physics
    • /
    • v.19 no.1
    • /
    • pp.80-88
    • /
    • 2008
  • Recent radiotherapy treatment planning systems (RTPS) generally adopt kernel-beam convolution methods for the computation of tissue dose. To obtain the depth dose and the profile dose at a given depth for a given photon beam, the energy spectrum was reconstructed from the attenuation of the dose transmitted through a filter, using iterative numerical analysis. The experiments were performed with 15 MV X-rays (Oncor, Siemens) and an ionization chamber (0.125 cc, PTW) for measurement of the filter-transmitted dose. The energy spectrum of the 15 MV X-rays was determined from the dose attenuated by lead filters of thickness from 0.51 cm to 8.04 cm, with an energy interval of 0.25 MeV. In the results, the peak flux appeared at 3.75 MeV, and the mean energy of the 15 MV X-rays was 4.639 MeV in these experiments. The transmitted dose through the lead filters agreed within 0.6% on average, with a maximum discrepancy of 2.5% at a 5 cm lead thickness. Since tissue dose depends strongly on the beam energy, the lateral dose was derived from the lateral spread of the energy fluence through the flattening-filter shape at tangents of 0.075 and 0.125, which gave mean energies of 4.211 MeV and 3.906 MeV, respectively. In these experiments, the analyzed energy spectrum was applied to obtain the percent depth dose of the RTPS (XiO, version 4.3.1, CMS). The percent depth dose generated for fields from 6×6 cm² to 30×30 cm² was very close to the experimental measurements, within 1% discrepancy on average. The computed dose profiles were within 1% of measurement for the 10×10 cm field size, while the larger field sizes were within 2% uncertainty. The resulting algorithm produced an X-ray spectrum that matches both the quality and quantity of the beam with small discrepancy in these experiments.
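
The unfolding described above recovers spectral weights from lead-filter transmission measurements. The paper's iterative numerical analysis is not detailed in the abstract, so this sketch substitutes a non-negative least-squares fit of the same transmission model T(t) = sum_i w_i * exp(-mu_i * t); the thicknesses, transmission values, and the attenuation curve `mu_pb` are placeholders, not measured data or NIST coefficients.

```python
import numpy as np
from scipy.optimize import nnls

# Placeholder inputs: lead thicknesses (cm), relative transmitted dose, and
# candidate energy bins at the paper's 0.25 MeV interval.
thickness = np.array([0.0, 0.51, 1.0, 2.0, 4.0, 8.04])
measured_t = np.array([1.0, 0.78, 0.63, 0.42, 0.21, 0.05])
energies = np.arange(0.25, 15.01, 0.25)
mu_pb = 0.7 * energies ** -0.3          # fake attenuation curve, shape only

# Model: T(t_j) = sum_i w_i * exp(-mu_i * t_j); the t = 0 row pins sum(w) = 1.
A = np.exp(-np.outer(thickness, mu_pb))
weights, residual = nnls(A, measured_t)  # non-negative spectral weights

mean_energy = np.sum(weights * energies) / np.sum(weights)
print(f"mean energy = {mean_energy:.3f} MeV, fit residual = {residual:.4f}")
```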

Analysis of Genetics Problem-Solving Processes of High School Students with Different Learning Approaches (학습접근방식에 따른 고등학생들의 유전 문제 해결 과정 분석)

  • Lee, Shinyoung;Byun, Taejin
    • Journal of The Korean Association For Science Education
    • /
    • v.40 no.4
    • /
    • pp.385-398
    • /
    • 2020
  • This study aims to examine the genetics problem-solving processes of high school students with different learning approaches. Two second-year high school students participated in a task that required solving a complicated pedigree problem. The participants had similar academic achievement in life science, but one had a deep learning approach while the other had a surface learning approach. In order to analyze the students' problem-solving processes in depth, each student's problem solving was video-recorded, and each student conducted a think-aloud interview after solving the problem. Although the students showed similar errors in their first trial at solving the problem, they showed different problem-solving processes in their last trial. Student A, who had a deep learning approach, voluntarily solved the problem three times and demonstrated correct conceptual framing of the three constraints using rule-based reasoning in the last trial. Student A monitored the consistency between the data and her own pedigree and reflected on the problem-solving process in the check phase of the last trial. Student A's problem-solving process in the third trial resembled a successful problem-solving algorithm. However, student B, who had a surface learning approach, involuntarily repeated solving the problem twice, and focused on and used only part of the data due to her goal-oriented, answer-seeking attitude. Student B showed incorrect conceptual framing by memory-bank or arbitrary reasoning, and maintained her incorrect conceptual framing of the constraints across both problem-solving processes. These findings can help in understanding the problem-solving processes of students who have different learning approaches, allowing teachers to better support students who have difficulties in approaching genetics problems.

A Study on the Effects of User Participation on Stickiness and Continued Use on Internet Community (인터넷 커뮤니티에서 사용자 참여가 밀착도와 지속적 이용의도에 미치는 영향)

  • Ko, Mi-Hyun;Kwon, Sun-Dong
    • Asia pacific journal of information systems
    • /
    • v.18 no.2
    • /
    • pp.41-72
    • /
    • 2008
  • The purpose of this study is to investigate the effects of user participation, network effect, social influence, and usefulness on stickiness and continued use in Internet communities. In this research, stickiness refers to repeat visits to and visit duration at an Internet community, while continued use means the willingness to continue to use an Internet community in the future. Internet community-based companies can earn money by selling digital contents such as games, music, and avatars, by advertising on the site, or by offering affiliate marketing; for such money making, the stickiness and continued use of Internet users are much more important than the number of Internet users. We tried to answer the following three questions. First, what are the effects of user participation on stickiness and continued use in Internet communities? Second, by what is user participation formed? Third, are the network effect, social influence, and usefulness that were significant in prior research on the technology acceptance model (TAM) still significant in Internet communities? In this study, user participation, network effect, social influence, and usefulness are independent variables, stickiness is the mediating variable, and continued use is the dependent variable. Among the independent variables, we focused on user participation, which means that the Internet user participates in the development of the Internet community site (called a mini-hompy or blog in Korea). User participation was studied from 1970 to 1997 in the information systems research area, but since 1997, when the Internet started to spread to the public, user participation has hardly been studied. Given the importance of user participation to the success of Internet-based companies, this research topic is very meaningful to study. To test the proposed model, we used a data set generated from a survey. The survey instrument was designed on the basis of a comprehensive literature review and interviews of experts, and was refined through several rounds of pretests, revisions, and pilot tests. The respondents were undergraduate and graduate students who mainly used Internet communities. Data analysis was conducted using 217 respondents (response rate, 97.7 percent). We used structural equation modeling (SEM) implemented in partial least squares (PLS). We chose PLS for two reasons: first, our model has formative constructs, and PLS uses a components-based algorithm that can estimate formative constructs; second, PLS is more appropriate when the research model is in an early stage of development, and a review of the literature suggests that empirical tests of user participation are still sparse. The test of the model was executed in the order of the three research questions. First, user participation had direct effects on stickiness (β=0.150, p<0.01) and continued use (β=0.119, p<0.05), and, as a partial mediation model, an indirect effect on continued use mediated through stickiness (β=0.007, p<0.05). Second, optional participation and prosuming participation significantly formed user participation; optional participation, with a path magnitude as high as 0.986 (p<0.001), is a key determinant of the strength of user participation. Third, network effect (β=0.236, p<0.001), social influence (β=0.135, p<0.05), and usefulness (β=0.343, p<0.001) had directly significant impacts on stickiness. However, network effect and social influence, as a full mediation model, had only indirectly significant impacts on continued use, mediated through stickiness (β=0.11, p<0.001, and β=0.063, p<0.05, respectively). In contrast, usefulness, as a partial mediation model, had both a direct impact on continued use and an indirect impact mediated through stickiness. This study makes three contributions. First, it is the first empirical study showing that user participation is a significant driver of continued use; information systems researchers have hardly studied user participation since the late 1990s, and marketing researchers have studied it only recently and only a little. Second, this study enhances the understanding of user participation. Until recently, user participation had been studied from the bipolar viewpoint of participation vs. non-participation, and even studies of participation were limited to optional participation. This study proved the existence of prosuming participation, in which users design and produce products or services, in addition to optional participation, and empirically showed that optional participation and prosuming participation are the key determinants of user participation. Third, our study complements traditional studies of TAM. According to the prior literature on TAM, the constructs of network effect, social influence, and usefulness affect technology adoption; this study proved that these constructs are still significant in Internet communities.
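
The mediation findings above rest on indirect effects of the form a·b (the path into stickiness times the path from stickiness to continued use). PLS itself is a components-based algorithm, so the sketch below is not PLS; it only illustrates bootstrapping an a·b indirect effect with ordinary least squares on synthetic data, with every name and number hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b: a = slope of the mediator on the predictor; b = partial slope of
    the outcome on the mediator, controlling for the predictor."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

# Synthetic standardized scores: participation (x), stickiness (m), use (y).
n = 217
x = rng.normal(size=n)
m = 0.15 * x + rng.normal(size=n)
y = 0.12 * x + 0.30 * m + rng.normal(size=n)

# Percentile bootstrap of the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```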

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, correctly estimating the user's future context is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context-prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, and to increase accuracy and decrease elapsed time for service response. To do so, we newly developed a pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using the context history. Then a pattern, consisting of the results of reasoning with the individual rules, is built for pattern learning. If at least one context property matches, say R, the pattern is regarded as right: if the pattern is new, it is added as a right pattern, the mismatched properties are set to 0, and freq = 1 with weight w(R, 1); otherwise, the frequency of the matched right pattern is increased by 1 and the weight set to w(R, freq). After training, if the frequency is greater than a threshold value, the right pattern is saved in the knowledge base. Conversely, if at least one context property mismatches, say W, the pattern is regarded as wrong: if the pattern is new, the result is corrected to the right answer and the pattern added with frequency 1 and weight w(W, 1); otherwise, the matched wrong pattern's frequency is increased by 1 and the weight set to w(W, freq). After training, if the frequency is greater than a threshold level, the wrong pattern is saved in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context; second, look for a matching pattern among the right patterns; if none matches, look for a matching pattern among the wrong patterns; if still no pattern matches, choose the one context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context histories from travelers who had visited the largest amusement park in Korea; 400 context records were collected in 2009. We randomly selected 70% of the records as training data and used the rest as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and we compared the performance with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time, which shows that the methodology is valid and scalable. As a second round of the experiment, we compared a full model to a partial model. A full model uses both right and wrong patterns to reason about the future context, whereas a partial model reasons only with right patterns, as generally adopted in the legacy alignment-prediction method. It turned out that the full model is better in terms of accuracy, while the partial model is better when considering elapsed time. As a last experiment, we took into consideration potential privacy problems that might arise among users. To mitigate such concerns, we excluded context properties such as the date of the tour and user-profile attributes such as gender and age; the outcome shows that preserving privacy is endurable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods for prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time, so the method is very practical for establishing privacy-preserving context-aware services. Future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users in order to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
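
A loose sketch of the right/wrong pattern learning summarized above. The abstract does not fully specify the weighting w(·, freq) or the matching criterion, so this simplification keeps only the pattern frequencies, the save threshold, and the three-step prediction order: right patterns first, then corrected wrong patterns, then a fallback on the most predictable single property.

```python
from collections import defaultdict

class PatternPredictor:
    """Pattern-based context prediction: learn right/wrong patterns with
    frequencies, prune by a threshold, and predict in three steps."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        self.right = defaultdict(int)   # pattern -> frequency
        self.wrong = defaultdict(int)   # (pattern, true context) -> frequency

    def train(self, pattern, actual):
        """A pattern is the tuple of per-attribute rule outputs; here it
        counts as 'right' when it equals the context that actually occurred."""
        if pattern == actual:
            self.right[pattern] += 1
        else:
            self.wrong[(pattern, actual)] += 1

    def finalize(self):
        """Keep only patterns whose frequency reaches the threshold."""
        self.right = {p: f for p, f in self.right.items() if f >= self.threshold}
        self.wrong = {k: f for k, f in self.wrong.items() if f >= self.threshold}

    def predict(self, pattern, fallback):
        if pattern in self.right:        # 1) trust a learned right pattern
            return pattern
        corrections = [(f, actual) for (p, actual), f in self.wrong.items()
                       if p == pattern]
        if corrections:                  # 2) reuse the stored corrected answer
            return max(corrections, key=lambda t: t[0])[1]
        return fallback(pattern)         # 3) most-predictable-property fallback
```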

Evaluation of Image Noise and Radiation Dose in Brain CT Using ASIR (Adaptive Statistical Iterative Reconstruction) (ASIR를 이용한 두부 CT의 영상 잡음 평가 및 피폭선량 분석)

  • Jang, Hyon-Chol;Kim, Kyeong-Keun;Cho, Jae-Hwan;Seo, Jeong-Min;Lee, Haeng-Ki
    • Journal of the Korean Society of Radiology
    • /
    • v.6 no.5
    • /
    • pp.357-363
    • /
    • 2012
  • The purpose of this study was to evaluate image noise and image quality and to assess dose reduction in brain CT using the adaptive statistical iterative reconstruction (ASIR) algorithm. Head CT examinations were divided into a group without ASIR (group A) and a group with 50% ASIR (group B). In the phantom study, the measured mean CT noise in group B was reduced by 46.9%, 48.2%, 43.2%, and 47.9% compared with group A at the central region (A) and the peripheral regions (B, C, D), respectively. For the displayed-image quality evaluation, CT numbers were measured by a quantitative analytical method and the noise was analyzed. The image noise differed statistically significantly between groups A and B, being higher in group A (31.87 HU, 31.78 HU, 26.6 HU, 30.42 HU; P<0.05). In the qualitative image evaluation using the head clinical image evaluation table, observers 1 and 2 scored group A at 73.17 and 74.2 out of 80, and group B at 71.77 and 72.47; the difference was not statistically significant (P>0.05), and no image was inadequate for diagnosis. As for the exposure dose, examinations with 50% ASIR reduced the radiation dose by 47.6% with no decline in image quality. In conclusion, if ASIR is applied in clinical practice, examinations should be possible at a much lower dose, and ASIR is expected to be a positive factor in the examiner's decisions.
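
A minimal sketch of the two measurements behind the figures above: CT noise as the standard deviation of HU values in a uniform ROI, and the percentage reduction between the non-ASIR and ASIR groups. The example numbers are illustrative only, not the study's data.

```python
import numpy as np

def ct_noise(roi_hu):
    """CT image noise: standard deviation of HU values in a uniform ROI."""
    return float(np.std(np.asarray(roi_hu)))

def percent_reduction(without_asir, with_asir):
    """Percentage reduction of noise (or dose) with vs. without ASIR."""
    return 100.0 * (without_asir - with_asir) / without_asir

# Illustrative only: a ~47% noise drop like the phantom result above.
print(f"{percent_reduction(31.87, 16.9):.1f}% noise reduction at the central ROI")
```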