• Title/Summary/Keyword: Compact method

Search Results: 1,067

Establishment of Valve Replacement Registry and Risk Factor Analysis Based on Database Application Program (데이터베이스 프로그램에 기반한 심장판막 치환수술 환자의 레지스트리 확립 및 위험인자 분석)

  • Kim, Kyung-Hwan;Lee, Jae-Ik;Lim, Cheong;Ahn, Hyuk
    • Journal of Chest Surgery
    • /
    • v.35 no.3
    • /
    • pp.209-216
    • /
    • 2002
  • Background: Valvular heart disease is still the most common health problem in Korea. By the end of 1999, 94,586 open heart surgeries had been performed since the first case in 1958. Among them, 36,247 cases were for acquired heart disease, and 20,704 of those involved valvular heart disease. However, there was no database system, and surgeons and physicians had great difficulty analysing and utilizing these tremendous medical resources. Therefore, we developed a valve registry database program and utilized it for risk factor analysis. Material and Method: A personal computer-based multiuser database program was created using Microsoft Access™. It consisted of a relational database structure with fine-tuned, compact field variables and a server-client architecture. A simple graphical user interface provided easy accessibility and comprehensibility. The user-oriented modular structure enabled easy modification through native Access™ functions. Extensive application of the query function allowed users to extract, summarize, analyse, and report study results promptly. Result: About three thousand valve replacement procedures were performed in our hospital from 1968 to 1999. The total number of prostheses replaced was 3,700. The numbers of mitral, aortic, and tricuspid valve replacements were 1,600, 584, and 76, respectively. Among them, 700 patients received prostheses in more than two positions. Bioprostheses and mechanical prostheses were used in 1,280 and 1,500 patients, respectively. Redo valve replacements were performed in 460 patients in total, about 40 patients annually. Conclusion: A database program for the registry of valvular heart disease was successfully developed and used in a personal computer-based multiuser environment. It showed promising results and perspectives for database management and utilization.
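
A minimal sketch of the server-client relational structure the abstract describes, using Python's built-in SQLite in place of the original Microsoft Access implementation; all table and field names here are hypothetical, not the registry's actual schema.

```python
import sqlite3

# Hypothetical two-table registry: patients and their valve replacement
# procedures, related by patient_id so queries can summarize the data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT,
    birth_year INTEGER
);
CREATE TABLE replacement (
    proc_id    INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(patient_id),
    op_year    INTEGER,
    position   TEXT CHECK (position IN ('mitral', 'aortic', 'tricuspid')),
    prosthesis TEXT CHECK (prosthesis IN ('mechanical', 'bioprosthesis')),
    redo       INTEGER DEFAULT 0    -- 1 if a redo replacement
);
""")
conn.execute("INSERT INTO patient VALUES (1, 'example patient', 1950)")
conn.execute("INSERT INTO replacement VALUES (1, 1, 1995, 'mitral', 'mechanical', 0)")

# Query-function style summary: replacements counted per valve position.
rows = conn.execute(
    "SELECT position, COUNT(*) FROM replacement GROUP BY position"
).fetchall()
print(rows)  # [('mitral', 1)]
```

Grouping queries like this are what let such a registry "extract, summarize, analyse and report" without custom code for each question.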

Histological Analysis of Autologous Pericardial Tissue Used as a Small-Diameter Arterial Graft (소구경 동맥이식편으로 사용한 자가심낭의 조직학적 분석)

  • Yang Ji-Hyuk;Sung Sang-Hyun;Kim Won-Gon
    • Journal of Chest Surgery
    • /
    • v.39 no.4 s.261
    • /
    • pp.261-268
    • /
    • 2006
  • Background: Current vascular prostheses are still inadequate for the reconstruction of small-diameter vessels. Autologous pericardium can be a good alternative for this purpose, as it possesses good blood compatibility and shows mechanical behavior similar to that of natural arteries. However, the clinical use of autologous pericardial tissue as a small-diameter vascular graft has been limited by mixed outcomes, uncertain biological behavior, and the difficulty of obtaining reliable patency results in animal experiments. To study this issue, we implanted fresh and glutaraldehyde-treated autologous pericardium as small-diameter arterial grafts in dogs and compared their time-related changes histologically. Material and Method: As 5 mm-diameter arterial grafts, one pair of autologous pericardial grafts, one glutaraldehyde-treated and one untreated (fresh), was implanted in the bilateral carotid arteries of the same dog for comparison. The patency of the grafts was evaluated at regular intervals with Doppler ultrasonography. After the predetermined periods of 3 days, 2 weeks, 1 month, 3 months, and 6 months, the grafts in each animal were explanted. The retrieved grafts were processed for light and electron microscopic analyses following gross observation. Result: Of 7 animals, 2 were excluded from the study: one died postoperatively due to bleeding, and in the other one side of the grafts was documented as obstructed. All 10 grafts in the remaining 5 dogs were patent. Grossly, variable degrees of thrombosis were observed on the luminal surfaces of the grafts at 3 days and 2 weeks, despite good patency. Smooth pseudointimal blood-contacting surfaces developed in the grafts at 1 month and later. By light microscopy, mesothelial cell layers of the pericardial tissue were absent in all explanted grafts. Newly formed endothelial cell layers on the blood-contacting surface were observed in both the glutaraldehyde-treated and fresh grafts at 3 months and later. The collagen fibers became degraded by fragmentation in the fresh grafts at 1 month and in the glutaraldehyde-treated grafts at 3 months. At 6 months, the collagen layers were no longer visible in either the glutaraldehyde-treated or fresh grafts. By electron microscopy, a greater amount of coarse fibrin fibers was observed in the fresh grafts than in the glutaraldehyde-treated grafts, and more compact and well-arrayed layers were observed in the glutaraldehyde-treated grafts than in the fresh grafts. Conclusion: The glutaraldehyde-treated small-diameter pericardial arterial grafts showed better endothelialization of the blood-contacting surface and slower fragmentation of the collagen layers than the fresh grafts, although it has yet to be proven whether these differences are significant enough to affect patency results between the groups.

Sensitivity Analysis for CAS500-4 Atmospheric Correction Using Simulated Images and Suggestion of the Use of Geostationary Satellite-based Atmospheric Parameters (모의영상을 이용한 농림위성 대기보정의 주요 파라미터 민감도 분석 및 타위성 산출물 활용 가능성 제시)

  • Kang, Yoojin;Cho, Dongjin;Han, Daehyeon;Im, Jungho;Lim, Joongbin;Oh, Kum-hui;Kwon, Eonhye
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1029-1042
    • /
    • 2021
  • As part of the next-generation Compact Advanced Satellite 500 (CAS500) project, CAS500-4 is scheduled to be launched in 2025, focusing on remote sensing for agriculture and forestry. To obtain quantitative information on vegetation from satellite images, it is necessary to acquire surface reflectance through atmospheric correction; thus, it is essential to develop an atmospheric correction method suitable for CAS500-4. Since the absorption and scattering characteristics of the atmosphere vary with wavelength, the sensitivity of atmospheric correction parameters such as aerosol optical depth (AOD) and water vapor (WV) must be analyzed considering the wavelengths of CAS500-4. In addition, as CAS500-4 has only five channels (blue, green, red, red edge, and near-infrared), making it difficult to directly calculate the key parameters for atmospheric correction, external parameter data should be used. Therefore, this study performed a sensitivity analysis of the key parameters (AOD, WV, and O3) using simulated images based on Sentinel-2 satellite data, which has wavelength specifications similar to CAS500-4, and examined the possibility of using the products of GEO-KOMPSAT-2A (GK2A) as atmospheric parameters. The sensitivity analysis showed that AOD was the most important parameter, with greater sensitivity in the visible channels than in the near-infrared region. In particular, since a 20% change in AOD causes about a 100% error in blue-channel surface reflectance over forests, a highly reliable AOD is needed to obtain accurate surface reflectance. The atmospherically corrected surface reflectance based on the GK2A AOD and WV was compared with Sentinel-2 L2A reflectance data through the separability index (SI) of known land cover pixels. The two corrected surface reflectances had similar SI values, but the atmospherically corrected surface reflectance based on the GK2A AOD showed higher SI than the Sentinel-2 L2A reflectance data in the short-wavelength channels. Thus, the parameters provided by GK2A can be fully utilized for atmospheric correction of CAS500-4. These findings will provide a basis for atmospheric correction of CAS500-4 in the future.
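
The kind of sensitivity test described above can be illustrated with a toy retrieval model, not the study's actual radiative-transfer code: a fixed top-of-atmosphere signal is corrected while the AOD assumed in the correction is perturbed, and the relative error in the retrieved surface reflectance is reported. The linearized model and all coefficient values below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical linearized atmosphere: path radiance grows with AOD,
# two-way transmittance shrinks with AOD.
def toa_reflectance(surface_refl, aod, path_coeff=0.12, trans_slope=0.3):
    transmittance = np.exp(-trans_slope * aod)
    return path_coeff * aod + transmittance * surface_refl

def retrieve_surface(toa, aod, path_coeff=0.12, trans_slope=0.3):
    # Invert the toy model to recover surface reflectance from a TOA signal.
    return (toa - path_coeff * aod) / np.exp(-trans_slope * aod)

true_refl, true_aod = 0.02, 0.3     # assumed dark forest pixel and AOD
toa = toa_reflectance(true_refl, true_aod)
for perturb in (-0.2, 0.2):         # +/-20% AOD error, as in the abstract
    retrieved = retrieve_surface(toa, true_aod * (1 + perturb))
    err = abs(retrieved - true_refl) / true_refl * 100
    print(f"AOD error {perturb:+.0%}: reflectance error {err:.0f}%")
```

Because dark targets such as forests have small surface reflectance, even a modest absolute error in the subtracted path radiance becomes a large relative error, which is the mechanism behind the high blue-channel sensitivity reported above.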

An Optimization Study on a Low-temperature De-NOx Catalyst Coated on Metallic Monolith for Steel Plant Applications (제철소 적용을 위한 저온형 금속지지체 탈질 코팅촉매 최적화 연구)

  • Lee, Chul-Ho;Choi, Jae Hyung;Kim, Myeong Soo;Seo, Byeong Han;Kang, Cheul Hui;Lim, Dong-Ha
    • Clean Technology
    • /
    • v.27 no.4
    • /
    • pp.332-340
    • /
    • 2021
  • With the recent reinforcement of emission standards, efforts are needed to reduce NOx emissions from air pollutant-emitting workplaces. The NOx reduction method mainly used in industrial facilities is selective catalytic reduction (SCR), and the most widely commercialized SCR catalyst is the ceramic honeycomb type. This study was carried out to reduce the NOx emitted from steel plants by applying a De-NOx catalyst coated on a metallic monolith. The De-NOx catalyst was synthesized through an optimized coating technique, and the coated catalyst adhered uniformly and strongly to the surface of the metallic monolith, as confirmed by air jet erosion and bending tests. Due to the good thermal conductivity of the metallic monolith, the coated De-NOx catalyst showed good De-NOx efficiency at low temperatures (200 ~ 250 ℃). In addition, the optimal amount of catalyst coating on the metallic monolith surface was determined for the design of an economical catalyst. Based on these results, a commercial-grade-size De-NOx catalyst was tested in a semi-pilot De-NOx performance facility under a simulated gas similar to the exhaust gas emitted from a steel plant. Even at a low temperature (200 ℃), it showed excellent performance, satisfying the emission standard (less than 60 ppm). Therefore, the De-NOx catalyst coated on a metallic monolith has good physical and chemical properties and showed good De-NOx efficiency even with a minimum amount of catalyst. Additionally, the application of high-density cells made it possible to make the SCR reactor more compact. We therefore suggest that the proposed De-NOx catalyst coated on a metallic monolith may be a good alternative De-NOx catalyst for industrial uses such as steel plants, thermal power plants, incineration plants, ships, and construction machinery.
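
The arithmetic behind "satisfying the emission standard (less than 60 ppm)" is simple conversion efficiency; a short sketch, where the inlet NOx concentration is an assumed value and not a figure from the study:

```python
# De-NOx (SCR) conversion efficiency: fraction of inlet NOx removed.
def denox_efficiency(inlet_ppm, outlet_ppm):
    return (inlet_ppm - outlet_ppm) / inlet_ppm * 100

inlet = 200.0      # hypothetical steel-plant inlet NOx concentration, ppm
standard = 60.0    # emission standard cited in the abstract, ppm

# Efficiency the catalyst must reach so the outlet just meets the standard.
print(f"required efficiency: {denox_efficiency(inlet, standard):.0f}%")
# required efficiency: 70%
```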

A Study on Evaluating the Possibility of Monitoring Ships of CAS500-1 Images Based on YOLO Algorithm: A Case Study of a Busan New Port and an Oakland Port in California (YOLO 알고리즘 기반 국토위성영상의 선박 모니터링 가능성 평가 연구: 부산 신항과 캘리포니아 오클랜드항을 대상으로)

  • Park, Sangchul;Park, Yeongbin;Jang, Soyeong;Kim, Tae-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1463-1478
    • /
    • 2022
  • Maritime transport accounts for 99.7% of the exports and imports of the Republic of Korea; therefore, developing a vessel monitoring system for efficient operation is of significant interest. Several studies have focused on tracking and monitoring vessel movements based on automatic identification system (AIS) data; however, ships without AIS can be monitored and tracked only to a limited extent. High-resolution optical satellite images can provide the missing layer of information in AIS-based monitoring systems because they can identify non-AIS vessels and small ships over a wide range. Therefore, it is necessary to investigate vessel monitoring and small-vessel classification systems using high-resolution optical satellite images. This study examined the possibility of developing ship monitoring systems using Compact Advanced Satellite 500-1 (CAS500-1) satellite images by first training a deep learning model on satellite image data and then performing detection on other images. To determine the effectiveness of the proposed method, the training data were acquired from ships in the Yellow Sea and its major ports, and the detection model was built using the You Only Look Once (YOLO) algorithm. Ship detection performance was evaluated for one domestic and one international port. The detection results for ships in the anchorage and berth areas were compared with the ship classification information obtained from AIS, and accuracies of 85.5% and 70% were achieved with the domestic and international classification models, respectively. The results indicate that high-resolution satellite images can be used to monitor moored vessels. The developed approach could potentially be used in vessel tracking and monitoring systems at major ports around the world if the accuracy of the detection model is improved through continuous construction of training data.
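
An evaluation of the kind described above, comparing detected ships against AIS-derived references, can be sketched by matching bounding boxes with intersection-over-union (IoU); the boxes, the 0.5 threshold, and the accuracy definition here are illustrative assumptions, not the study's exact protocol.

```python
# Boxes are (x1, y1, x2, y2) in image coordinates.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def detection_accuracy(detections, references, thresh=0.5):
    # Fraction of reference (AIS-derived) ships matched by some detection.
    matched = sum(
        any(iou(d, r) >= thresh for d in detections) for r in references
    )
    return matched / len(references)

refs = [(0, 0, 10, 10), (20, 20, 30, 30)]   # hypothetical AIS reference boxes
dets = [(1, 1, 10, 10), (50, 50, 60, 60)]   # hypothetical YOLO detections
print(detection_accuracy(dets, refs))  # 0.5: one of two reference ships found
```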

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, with new and used cars of different models all substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and the status quo is maintained, while the new car settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and therefore suggests a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see the best response for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification, or due to the data structure generated by typical car dealerships, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
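
The two-stage choice structure above (model at the first stage, new vs. used at the second) can be sketched as standard nested logit share computation; the utilities and dissimilarity parameter below are made-up illustrative values, not estimates from the study.

```python
import math

def nested_logit_shares(utilities, lam):
    """Choice probabilities for a two-level nested logit.

    utilities: {model: {"new": u, "used": u}}; lam is the dissimilarity
    (inclusive value) parameter, which must lie in (0, 1] for a
    well-specified model -- the abstract's mis-specification test.
    """
    # Inclusive value of each nest (car model).
    iv = {m: math.log(sum(math.exp(u / lam) for u in alts.values()))
          for m, alts in utilities.items()}
    denom = sum(math.exp(lam * v) for v in iv.values())
    shares = {}
    for m, alts in utilities.items():
        p_model = math.exp(lam * iv[m]) / denom          # first stage
        within = sum(math.exp(u / lam) for u in alts.values())
        for kind, u in alts.items():
            shares[(m, kind)] = p_model * math.exp(u / lam) / within
    return shares

u = {"Jetta":   {"new": 1.0, "used": 0.4},   # hypothetical utilities
     "Elantra": {"new": 0.8, "used": 0.5}}
s = nested_logit_shares(u, lam=0.6)          # lam < 1: within-nest correlation
print(round(sum(s.values()), 6))             # shares sum to 1
```

With `lam = 1` this reduces to the plain multinomial logit (the IIA model); `lam < 1` makes new and used cars of the same model closer substitutes than cars of different models, which is exactly the substitution pattern the proposed model is designed to capture.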


Development of Standard Process for Private Information Protection of Medical Imaging Issuance (개인정보 보호를 위한 의료영상 발급 표준 업무절차 개발연구)

  • Park, Bum-Jin;Yoo, Beong-Gyu;Lee, Jong-Seok;Jeong, Jae-Ho;Son, Gi-Gyeong;Kang, Hee-Doo
    • Journal of radiological science and technology
    • /
    • v.32 no.3
    • /
    • pp.335-341
    • /
    • 2009
  • Purpose : The medical imaging issuance is changed from conventional film method to Digital Compact Disk solution because of development on IT technology. However other medical record department's are undergoing identification check through and through whereas medical imaging department cannot afford to do that. So, we examine present applicant's recognition of private intelligence safeguard, and medical imaging issuance condition by CD & DVD medium toward various medical facility and then perform comparative analysis associated with domestic and foreign law & recommendation, lastly suggest standard for medical imaging issuance and process relate with internal environment. Materials and methods : First, we surveyed issuance process & required documents when situation of medical image issuance in the metropolitan medical facility by wire telephone between 2008.6.1$\sim$2008.7.1. in accordance with the medical law Article 21$\sim$clause 2, suggested standard through applicant's required documents occasionally - (1) in the event of oneself $\rightarrow$ verifying identification, (2) in the event of family $\rightarrow$ verifying applicant identification & family relations document (health insurance card, attested copy, and so on), (3) third person or representative $\rightarrow$ verifying applicant identification & letter of attorney & certificate of one's seal impression. Second, also checked required documents of applicant in accordance with upper standard when situation of medical image issuance in Kyung-hee university medical center during 3 month 2008.5.1$\sim$2008.7.31. Third, developed a work process by triangular position of issuance procedure for situation when verifying required documents & management of unpreparedness. 
Result : Look all over the our manufactured output in the hospital - satisfy the all conditions $\rightarrow$ 4 place(12%), possibly request everyone $\rightarrow$ 4 place(12%), and apply in the clinic section $\rightarrow$ 9 place(27%) that does not medical imaging issuance office, so we don't know about required documents condition. and look into whether meet or not the applicant's required documents on upper 3month survey - satisfy the all conditions $\rightarrow$ 629 case(49%), prepare a one part $\rightarrow$ 416 case(33%), insufficiency of all document $\rightarrow$ 226case(18%). On the authority of upper research result, we are establishing the service model mapping for objective reception when image export situation through triangular position of issuance procedure and reduce of friction with patient and promote the patient convenience. Conclusion : The PACS is classified under medical machinery that mean indicates about higher importance of medical information therefore medical information administrator's who already received professional education & mind, are performer about issuance process only and also have to provide under ID checking process exhaustively.
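
The per-applicant document rules proposed above can be expressed as a small lookup, useful for flagging incomplete applications at reception; the function and field names are hypothetical, not part of the published process.

```python
# Documents required by applicant relation, per the suggested standard:
# self -> ID; family -> ID + family-relation document; third person or
# representative -> ID + letter of attorney + certificate of seal impression.
REQUIRED = {
    "self":   {"applicant_id"},
    "family": {"applicant_id", "family_relation_document"},
    "proxy":  {"applicant_id", "letter_of_attorney", "seal_certificate"},
}

def missing_documents(relation, presented):
    """Return the set of still-missing documents; empty means complete."""
    return REQUIRED[relation] - set(presented)

print(missing_documents("family", ["applicant_id"]))
# {'family_relation_document'}
```

Encoding the rules as data rather than branching logic makes it straightforward to update the table if the legal requirements change.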
