• Title/Summary/Keyword: Repeated Processing


A Document Collection Method for More Accurate Search Engine (정확도 높은 검색 엔진을 위한 문서 수집 방법)

  • Ha, Eun-Yong;Gwon, Hui-Yong;Hwang, Ho-Yeong
    • The KIPS Transactions:PartA / v.10A no.5 / pp.469-478 / 2003
  • Internet search engines use web robots that visit servers connected to the Internet, periodically or aperiodically, extract and classify the collected data according to their own methods, and build the databases on which web information search engines rest. This procedure is repeated very frequently across the Web. Many search engine sites operate it strategically in order to become popular Internet portal sites that guide users to information on the Web. A web search engine contacts many thousands of web servers, maintains its existing databases, and crawls for data about newly connected servers. These jobs, however, are decided and carried out by the search engines alone: they run web robots against web servers without any knowledge of the servers' document states. Each search engine issues a large number of requests and receives the corresponding responses, which is one cause of increased Internet traffic. If each web server instead notified web robots with a summary of its public documents, and each robot then collected only the documents indicated by that summary, unnecessary Internet traffic would be eliminated, the accuracy of the data held by search engines would improve, and the processing overhead of web-related jobs on both the web servers and the search engines would fall. In this paper, a monitoring system on the web server is designed and implemented; it monitors the states of the documents on the server, summarizes the changes to modified documents, and sends the summary information to web robots that want to fetch documents from that server. An efficient web robot for the web search engine is also designed and implemented; it uses the notified summary to fetch the corresponding documents from the web servers, extract index terms, and update its databases.
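The notification scheme is described only at the architecture level in the abstract; the hypothetical Python sketch below (not the authors' implementation) shows a server-side routine that summarizes documents modified since a robot's last visit, and a robot-side helper that fetches only the documents named in that summary. The document root, field names, and the fetch/index callables are assumptions.

```python
import hashlib
import os

DOC_ROOT = "/var/www/html"   # hypothetical public document root

def summarize_changes(since: float) -> list[dict]:
    """Server side: list public documents modified after `since` (epoch seconds)."""
    summary = []
    for dirpath, _, filenames in os.walk(DOC_ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if mtime > since:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                summary.append({
                    "url_path": os.path.relpath(path, DOC_ROOT),
                    "modified": mtime,
                    "sha256": digest,   # lets a robot skip bodies it already holds
                })
    return summary

def crawl_from_summary(summary: list[dict], fetch, index) -> None:
    """Robot side: request only the documents listed in the server's summary.

    `fetch(url_path)` and `index(url_path, body)` are placeholders for the
    robot's HTTP client and its indexing / database-update step.
    """
    for entry in summary:
        body = fetch(entry["url_path"])
        index(entry["url_path"], body)
```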

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and man-hour-intensive. For production seismic data processing, a good velocity analysis tool is required as well as a high-performance computer, and the tool must give fast and accurate results. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point; the plot generally consists of a semblance contour, a super gather, and a stack panel, and the interpreter chooses the velocity function by analyzing the plot. The technique depends heavily on the interpreter's skill and requires considerable human effort. As high-speed graphic workstations have become more common, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with the mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to noise, especially the coherent noise often found in the shallow region of marine seismic data; for accurate velocity analysis this noise must be removed before the spectrum is computed. The analysis must also be carried out with carefully chosen analysis points and an accurate computation of the spectrum. The picked velocity function must be verified by mute and stack, and the sequence usually has to be repeated many times, so an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all the processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes produce the final stack with a few mouse clicks, enabling iterative and interactive processing. A simple trace indexing scheme is introduced, together with a program that builds the index of a Geobit seismic disk file; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and the refracted wave, but it has two advantages: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs under X-Window/Motif, with a menu designed according to the Motif style guide, and a brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
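The abstract does not give xva's internal formulas; the NumPy sketch below is only a generic semblance velocity spectrum for a CMP gather (hyperbolic NMO traveltimes with nearest-sample lookup, stacked energy over total energy in a short time window), to illustrate the kind of computation an interactive velocity analysis tool repeats at each analysis point. Array shapes, the window size, and the nearest-sample interpolation are assumptions.

```python
import numpy as np

def semblance(gather, offsets, dt, velocities, window=5):
    """Semblance velocity spectrum of an NMO-uncorrected CMP gather.

    gather     : (n_samples, n_traces) array of trace amplitudes
    offsets    : (n_traces,) source-receiver offsets in metres
    dt         : sample interval in seconds
    velocities : (n_vel,) trial stacking velocities in m/s
    window     : half-width (in samples) of the semblance time window
    """
    n_samples, n_traces = gather.shape
    t0 = np.arange(n_samples) * dt                     # zero-offset times
    spectrum = np.zeros((n_samples, len(velocities)))

    for iv, v in enumerate(velocities):
        # Hyperbolic NMO traveltime for each (t0, offset), nearest-sample lookup
        t_nmo = np.sqrt(t0[:, None] ** 2 + (offsets[None, :] / v) ** 2)
        idx = np.rint(t_nmo / dt).astype(int)
        valid = idx < n_samples
        corrected = np.where(valid,
                             gather[np.minimum(idx, n_samples - 1),
                                    np.arange(n_traces)],
                             0.0)
        # Semblance: energy of the stacked trace over total energy, windowed in time
        num = np.square(corrected.sum(axis=1))
        den = n_traces * np.square(corrected).sum(axis=1)
        k = np.ones(2 * window + 1)
        spectrum[:, iv] = np.convolve(num, k, "same") / (
            np.convolve(den, k, "same") + 1e-12)
    return spectrum
```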

Quantitative Indices of Small Heart According to Reconstruction Method of Myocardial Perfusion SPECT Using the 201Tl (201Tl을 이용한 심근관류 SPECT에서 재구성 방법에 따른 작은 용적 심장의 정량 지표 변화)

  • Kim, Sung Hwan;Ryu, Jae Kwang;Yoon, Soon Sang;Kim, Eun Hye
    • The Korean Journal of Nuclear Medicine Technology / v.17 no.1 / pp.18-24 / 2013
  • Purpose: Myocardial perfusion SPECT using $^{201}Tl$ is an important method for assessing left ventricular viability and for quantitative evaluation of cardiac function, and various reconstruction methods are now used to improve image quality. In small hearts, however, the partial volume effect at the reconstruction step can introduce errors into the quantitative indices, so care is always needed. In this study we compared the quantitative indices of the left ventricle obtained with different reconstruction methods of myocardial perfusion SPECT against Echocardiography and assessed the degree of difference between them. Materials and Methods: Using an ESV of 30 mL on Echocardiography as the criterion, 278 patients (98 men, 188 women; mean age $65.5{\pm}11.1$) who visited Asan Medical Center from February to September 2012 were divided into two categories: small hearts below the criterion, and normal or large hearts otherwise. For each case we applied both FBP and OSEM reconstruction, calculated EDV, ESV, and LVEF, and performed repeated measures ANOVA against the indices measured by Echocardiography. Results: In both men and women, EDV showed no significant difference between FBP and OSEM (p=0.053, p=0.098), but both differed significantly from Echocardiography (p<0.001). For ESV, significant differences appeared among FBP, OSEM, and Echocardiography, especially in women with small hearts. For LVEF, there was no difference among FBP, OSEM, and Echocardiography in men and women with normal-sized hearts (p=0.375, p=0.969), but women with small hearts showed significant differences (p<0.001). Conclusion: Across the reconstruction methods, the quantitative indices of the left ventricle showed no differences in patients with normal-sized hearts, but in small hearts with ESV under 30 mL, especially in women, there were significant differences among FBP, OSEM, and Echocardiography. We found that the overestimation of LVEF caused by the partial volume effect can on average be reduced by applying OSEM, whichever gamma camera is used in the analysis.
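As a minimal sketch of the repeated measures ANOVA step only (illustrative toy values, not the study's measurements; the column names are assumptions), one common way to run it in Python is with statsmodels:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per patient per method, with the
# echocardiography measurement treated as a third "method" level.
data = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "method":  ["FBP", "OSEM", "Echo"] * 3,
    "ESV":     [28.1, 25.4, 24.0, 35.2, 33.8, 34.1, 22.7, 20.9, 19.5],
})

# Repeated measures ANOVA: does ESV differ across the three methods
# when every patient is measured with all of them?
result = AnovaRM(data, depvar="ESV", subject="patient", within=["method"]).fit()
print(result)
```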

The Performance Analysis of GPU-based Cloth simulation according to the Change of Work Group Configuration (워크 그룹 구성 변화에 따른 GPU 기반 천 시뮬레이션의 성능 분석)

  • Choi, Young-Hwan;Hong, Min;Lee, Seung-Hyun;Choi, Yoo-Joo
    • Journal of Internet Computing and Services / v.18 no.3 / pp.29-36 / 2017
  • These days, 3D dynamic simulation is closely related to many industries. In the past, physically based 3D simulation was used mainly in car-crash or construction-related fields, but today it also plays an important role in movies and games. Many mathematical computations are needed to represent a 3D object realistically, and it is difficult for a CPU-based application to process such a large amount of calculation in real time. Recently, with advanced graphics hardware and improved architectures, the GPU can be used for general-purpose computation as well as for graphics, and many GPU-based approaches have been applied in various research fields. In this paper, we analyze the performance variation of two GPU-based cloth simulation algorithms according to the execution properties of the GPU shaders, in order to optimize the performance of GPU-based cloth simulation. Cloth simulation is implemented with a spring-centric algorithm and a node-centric algorithm using GPU parallel computing with the compute shader of GLSL 4.3, and we compare the performance of the two algorithms as the size and dimension of the work group change. Each test is repeated 10 times over 5,000 frames, and the results are reported as the averaged FPS. The experimental results show that the node-centric algorithm runs faster than the spring-centric algorithm.
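The paper's two strategies are GLSL compute shaders; the CPU-side NumPy sketch below only illustrates the distinction between them under simple assumptions (linear springs, no damping, an illustrative stiffness constant): a spring-centric pass assigns one worker per spring and scatters each force to both endpoints (the step that needs atomic adds on a GPU), while a node-centric pass lets each node accumulate the forces of its own springs without atomics.

```python
import numpy as np

K_SPRING = 50.0   # illustrative stiffness, not a value from the paper

def spring_centric_forces(pos, springs, rest_len):
    """One worker per spring: compute its force once, scatter to both nodes.

    pos (n, 3) node positions; springs (m, 2) endpoint indices; rest_len (m,).
    """
    forces = np.zeros_like(pos)
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = K_SPRING * (length - rest_len[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(forces, i, f)    # scatter-add: the atomic step on a real GPU
    np.add.at(forces, j, -f)
    return forces

def node_centric_forces(pos, adjacency):
    """One worker per node: adjacency[n] = [(neighbour_index, rest_length), ...]."""
    forces = np.zeros_like(pos)
    for n, neighbours in enumerate(adjacency):
        for j, rest in neighbours:
            d = pos[j] - pos[n]
            length = max(np.linalg.norm(d), 1e-9)
            forces[n] += K_SPRING * (length - rest) * d / length
    return forces
```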

Secure Routing Mechanism using one-time digital signature in Ad-hoc Networks (애드혹 네트워크에서의 one-time 전자 서명을 이용한 라우팅 보안 메커니즘)

  • Pyeon, Hye-Jin;Doh, In-Shil;Chae, Ki-Joon
    • The KIPS Transactions:PartC / v.12C no.5 s.101 / pp.623-632 / 2005
  • In an ad-hoc network there is no fixed infrastructure such as base stations or mobile switching centers. Because of these basic characteristics, the security of an ad-hoc network is more vulnerable than that of traditional networks, and current routing protocols for ad-hoc networks allow many different types of attacks by malicious nodes. Malicious nodes can disrupt the correct functioning of a routing protocol by modifying routing information, fabricating false routing information, and impersonating other nodes. We propose a routing security mechanism based on one-time digital signatures. In our proposal, we use one-time digital signatures based on one-way hash functions in order to limit or prevent attacks by malicious nodes. To generate and keep a large number of public key sets, we derive multiple key sets from hash chains by repeatedly hashing the public key elements of the first set. Each node then publishes its own public keys and broadcasts routing messages that include a one-time digital signature during route discovery and route setup. This mechanism provides authentication and message integrity and prevents attacks from malicious nodes. Simulation results indicate that our mechanism increases the routing overhead in a highly mobile environment, but it provides strong security in the route discovery process and increases network efficiency.
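The paper's exact key-set construction is not reproduced here; the Python sketch below shows only the underlying hash-chain idea under generic assumptions: a node publishes the anchor of a chain of repeated SHA-256 hashes, later reveals deeper elements one at a time, and any neighbour can verify a released value by hashing it forward to the anchor.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_hash_chain(length: int) -> list[bytes]:
    """Build a hash chain by repeated hashing of a random seed.

    After reversal, chain[0] is the public anchor a node would publish,
    and chain[k] is the value it can reveal at step k.
    """
    chain = [os.urandom(32)]
    for _ in range(length - 1):
        chain.append(H(chain[-1]))
    chain.reverse()
    return chain

def verify_release(anchor: bytes, released: bytes, steps: int) -> bool:
    """Check that `released` hashes forward to the published anchor in `steps` hashes."""
    value = released
    for _ in range(steps):
        value = H(value)
    return value == anchor

# Usage: publish chain[0]; reveal chain[k] with the k-th routing message,
# and any neighbour verifies it against the anchor.
chain = make_hash_chain(10)
assert verify_release(chain[0], chain[3], steps=3)
```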

A Study on Electrolysis of Heavy Water and Interaction of Hydrogen with Lattice Defects in Palladium Electrodes (팔라디움전극에서 중수소의 전기분해와 수소와 격자결함의 반응에 관한 연구)

  • Ko, Won-Il;Yoon, Young-Ku;Park, Yong-Ki
    • Nuclear Engineering and Technology / v.24 no.2 / pp.141-153 / 1992
  • Excess tritium analysis was performed to verify whether cold fusion occurs during the electrolysis of heavy water in the current density range of 83~600 mA/$\textrm{cm}^2$ for periods of 24~48 hours, using palladium electrodes of seven different processing treatments and geometries. The extent of recombination of D$_2$ and $O_2$ gases in the electrolytic cell was measured to allow calculation of accurate enthalpy values. The behavior and interaction of hydrogen atoms with defects in the Pd electrodes were examined using Sieverts gas charging and the positron annihilation (PA) method. The slight enrichment of tritium that was observed was attributed to electrolytic enrichment, not to the formation of a by-product of cold fusion. The extent of recombination of D$_2$ and $O_2$ gases was 32%; hence the excess heat measured during electrolysis was attributed to the exothermic recombination reaction rather than to nuclear fusion. Lifetime results from the PA measurements on the Pd electrodes indicated that hydrogen atoms could be trapped at dislocations and vacancies in the electrodes, with dislocations slightly preferred over vacancies. It was also inferred from the R parameters that the formation of hydrides was accompanied mostly by the generation of dislocations. Doppler broadening results for the Pd electrodes indicated that the lattice defect sites where positrons were trapped first increased and then decreased, and that this cycle was repeated as electrolysis continued. From PA measurements on cold-rolled Pd and isochronally annealed Pd hydride specimens, it can be inferred that microvoid-type defects existed in the hydrogen-charged electrode specimen.

Study on the Cases and Features of Chair Design Inducing the Participation of Users - Focused on the cases of chair design from 1966 up to now - (사용자 참여를 유도하는 의자디자인의 사례와 특성에 관한 연구 - 1966년부터 현재까지 디자인된 의자디자인의 사례를 중심으로 -)

  • Kim, Jin-Woo
    • Korean Institute of Interior Design Journal / v.16 no.2 s.61 / pp.262-269 / 2007
  • The blur phenomenon obscuring the boundary between the field of designers and that of users may be a key paradigm of the 21st century. In the furniture design field, however, a number of chair designs that can be regarded as results of this blur phenomenon have been observed since the mid-1960s. The backgrounds include a reaction against uniform functionalism, deliberation on future ways of living, and the development of plastic materials and their processing methods. Against these backgrounds, designers pursued new and futuristic furniture design, and in that process the "freedom" that consumers as well as designers should have in using furniture became an important concept. This concept enabled the creation of chair designs that induce the participation of consumers. Mainly using the new material, plastic, designers created various shapes, functions, and structures that consumers could engage with as if playing with toy blocks. In the formative aspect, the overall shapes are classified into organic and geometric. The unit types are divided into two kinds: a type in which a unit of simple shape is repeated with only a difference in size, and a type in which units of two or more shapes are combined irregularly. In the functional aspect, some cases transform and expand the function more variously, while others change the function of the chair into tables, cabinets, or objects. In the structural aspect, based on the method of assembling the units, one approach assembles them with hardware and the other assembles them only with the chair's own intrinsic units. Chair design created by the blur phenomenon between designers and users, as described above, in turn blurs the boundary between the furniture and the space in which it is installed. Accordingly, the sphere of furniture design is expanding to include cases in which the furniture is not merely an article selected to suit the character of the interior space but becomes an element that leads the character of the space. This study aims to assess the changes in interior space and furniture that this blur phenomenon may cause, by examining the cases above in light of the paradigm of the 21st century. Furthermore, the results of this study will enable discussion of the directions of future furniture design.

An Automated Sharing Scheme of CAD Tools and License Resources Based on Directory Service (디렉토리 서비스에 기반한 CAD 툴 및 라이센스 자원의 자동화된 공유 방식)

  • Jung Sung-Heon;Yim In-Sung;Jhang Kyoung-Son
    • The KIPS Transactions:PartA / v.13A no.1 s.98 / pp.27-34 / 2006
  • Designers need CAD tool/license information, such as the number of available licenses or tools, their types, and the configuration methods, to use CAD tools properly in their group. Usually this information is provided by the managers who administer the CAD tool and license servers of the design group. With previous CAD tool/license sharing methods, designers have to obtain the information manually and set up their environments by hand; when a new designer joins the design group, these processes waste unnecessary design time, so the designer's productivity and the utilization of the CAD tools decrease. Managers also waste time and effort, since they must provide the CAD tool/license information to each designer individually. In this paper, we present an automated scheme for sharing CAD tool/license information based on a directory service. The proposed method automates the communication between managers and designers, minimizing the time needed to share CAD information. In addition, because the system maintains the consistency of the CAD information automatically, it removes the repeated work of managers and designers and improves the efficiency of sharing. It also automates the license configuration steps using a proxy, and finally we offer an Executable Proxy that is suitable for thin CAD applications. We believe this scheme will reduce the time and effort of designers and managers as well as maximize the utilization of CAD tools.
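The abstract does not name a concrete directory protocol or schema; assuming an LDAP directory and the third-party ldap3 package, a designer-side lookup might look like the hypothetical sketch below. The server address, bind DN, subtree, and attribute names such as toolName and availableSeats are invented for illustration, not taken from the paper.

```python
from ldap3 import ALL, Connection, Server

# Hypothetical directory holding one entry per CAD license server.
server = Server("ldap://directory.example.com", get_info=ALL)
conn = Connection(server,
                  user="cn=designer,ou=people,dc=example,dc=com",
                  password="secret",
                  auto_bind=True)

# Ask the directory which tools have free seats and how to configure them.
conn.search(search_base="ou=cadLicenses,dc=example,dc=com",
            search_filter="(objectClass=licenseServer)",
            attributes=["toolName", "availableSeats",
                        "licenseServerHost", "setupCommand"])

for entry in conn.entries:
    if int(str(entry.availableSeats)) > 0:
        # setupCommand stands in for the automated environment-configuration
        # step that the paper's proxy performs for the designer.
        print(entry.toolName, "->", entry.licenseServerHost, "|", entry.setupCommand)
```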

A Study on Software Fault Analysis and Management Method using Defect Tracking System (결함 추적 시스템에 의한 소프트웨어 결함 분석 및 관리기법 연구)

  • Joon, Moon-Young;Yul, Rhew-Sung
    • The KIPS Transactions:PartD / v.15D no.3 / pp.321-326 / 2008
  • Software defects that are not found during a project frequently appear during maintenance after the software has been fully developed. As the frequency of defects surfacing during maintenance increases, the cost increases as well, while quality and customer confidence decrease. The defect rate will go down only if cause analysis and process improvement are performed constantly. This study builds a defect tracking system (DTS) around the Pareto principle: most defects are repetitions of defects that have occurred before. Based on the records of defects found earlier during maintenance, DTS tracks the causes of software defects and provides the developer, operator, and maintenance engineer with basic data for improving the software concerned, so that the same defects are no longer manifested or repeated. The basic functions of DTS are to analyze the defect type, provide a measurement index for it, and aggregate the defect types per program. This paves the way for correcting all the defects of a piece of software, since it enables the defect correction team to check the measured defect types. When DTS was applied to the software configuration management system of the W company, around 65% of its software defects were corrected.
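As a minimal illustration of the Pareto-style defect-type aggregation that a system like DTS relies on (toy defect types and counts, not the W company's records; the DTS internals are not specified in the abstract), a few lines of pandas are enough to rank defect types and find the "vital few" that account for most occurrences:

```python
import pandas as pd

# Hypothetical defect log: one row per reported defect with its classified type.
defects = pd.DataFrame({
    "defect_type": ["null-check", "off-by-one", "null-check", "config",
                    "null-check", "off-by-one", "ui-label", "null-check"],
})

# Aggregate by type and compute the cumulative share of all defects.
counts = defects["defect_type"].value_counts()
cumulative = counts.cumsum() / counts.sum()

# Pareto view: the few defect types covering ~80% of occurrences are the
# first candidates for cause analysis and process improvement.
pareto_report = pd.DataFrame({"count": counts, "cumulative_share": cumulative})
vital_few = pareto_report[pareto_report["cumulative_share"] <= 0.8]
print(pareto_report)
print("Focus types:", list(vital_few.index))
```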

Measurement of MRI Monitor Luminance and MRI Room Illuminance with a Light Probe (Light Probe를 이용한 MRI 검사실 및 모니터의 조도와 휘도 측정)

  • Kim, Ji Min;Han, Ah Yung;Lee, Ha Young;Lee, So Ra;Kweon, Dae Cheol
    • Journal of the Korean Magnetics Society / v.26 no.5 / pp.168-172 / 2016
  • The purpose of this study was to measure the luminance and illuminance of MRI rooms and their monitors in order to establish an optimal environment for the MRI room. Luminance and illuminance were measured for the MRI rooms (n = 10) of university hospitals (n = 6) using an Unfors Xi Light Probe (Unfors Instruments AB, Billdal, Sweden). The black-level and white-level luminance measurements were repeated three times at the middle and side of each monitor, the mean and standard deviation were obtained, and significance was tested statistically with a t-test. The black-level monitor luminance averaged $1.78 cd/m^2$ with a standard deviation of $0.85 cd/m^2$, and the white level averaged $43.58 cd/m^2$ with a standard deviation of $13.19 cd/m^2$. The illuminance of the MRI rooms varied widely, with a minimum of 30.5 lux, a maximum of 601.3 lux, and a mean of 177.86 lux. The luminance and illuminance of the MRI rooms and monitors showed statistically significant differences (p < .05). In conclusion, the recommended standards for MRI room and monitor luminance and illuminance should be consulted to create an optimal environment.
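As a sketch of the statistical step only (illustrative readings, not the study's data; the abstract does not state whether the t-test was paired or independent, so a paired test on repeated readings of one monitor is assumed here), SciPy makes the computation short:

```python
import numpy as np
from scipy import stats

# Hypothetical repeated readings (cd/m^2): three black-level and three
# white-level luminance measurements for one monitor (not the study's data).
black = np.array([1.6, 1.9, 1.8])
white = np.array([41.2, 44.8, 44.7])

print("black: mean %.2f, sd %.2f" % (black.mean(), black.std(ddof=1)))
print("white: mean %.2f, sd %.2f" % (white.mean(), white.std(ddof=1)))

# Paired t-test on the repeated readings of the same monitor.
t_stat, p_value = stats.ttest_rel(black, white)
print("t = %.3f, p = %.4f" % (t_stat, p_value))
```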