• Title/Abstract/Keyword: high data-rate system

Search results: 1,763 items (processing time: 0.036 s)

e-비즈니스가 경영성과에 미치는 영향 -홈쇼핑을 중심으로- (The Effects of e-Business on Business Performance - In the home-shopping industry -)

  • 김세중;안선숙
    • 경영과정보연구
    • /
    • Vol. 22
    • /
    • pp.137-165
    • /
    • 2007
  • It seems high time to increase productivity by adopting e-business to overcome challenges posed by external factors, including the appreciation of the Korean won, oil price hikes, and fierce global competition, and by domestic issues represented by the disparities between large corporations and small and medium enterprises (SMEs), between the Seoul metropolitan area and local cities, and between exports and domestic demand, all of which weaken future growth engines in the Korean economy. The globalization era demands innovative changes in business processes and industrial structure aimed at creating new value. To this end, e-business is expected to play a core role in the sophistication of the Korean economy through new value and innovation. To examine business performance in e-business-adopting industries, this study analyzed the home shopping industry by closely examining financial ratios, including the ratio of net profit to sales, the ratio of operating income to sales, the ratio of gross cost to sales cost, the ratio of gross cost to selling, general and administrative (SG&A) expense, and return on investment (ROI). This study used corporate financial statements as its main resource, calculating financial ratios through the Data Analysis, Retrieval and Transfer System (DART) of the Financial Supervisory Service, one of Korea's financial supervisory authorities. First, the trend analysis of the ratio of net profit to sales showed the following. CJ Home Shopping has registered a remarkable increase in its ratio of net profit to sales since 2002, while its competitors find it hard to catch up with CJ's performance. This is partly due to efficient management relative to its capital. If the current trend continues, this will let the front-runner assume the largest market share.
On the other hand, GS Home Shopping, despite the best-organized system and the largest capital among its competitors, lacks efficiency in management. Second, the trend analysis of the ratio of operating income to sales showed the following. Until 2004, CJ Home Shopping and GS Home Shopping recorded similar growth trends. In 2005, however, while CJ Home Shopping's operating income continued to increase, GS Home Shopping's operating income declined, widening the income gap with CJ Home Shopping. While CJ Home Shopping, with the largest market share in the home shopping industry, is engaged in aggressive marketing, GS Home Shopping, with its stability-driven management strategy, falls behind CJ in the ratio of operating income to sales in spite of a favorable management environment including its large capital. The companies in Group B were all established in the same year, 2001. NS Home Shopping was the first in Group B to turn a loss into a profit. Woori Home Shopping posted operating losses for three consecutive years and was finally sold to Lotte Group in 2007, but has since registered a continuing increase in net income on sales. Third, the trend analysis of the ratio of gross cost to sales cost showed the following. Since home shopping is a sales business, its cost of sales is much lower than in other types of business such as manufacturing. Within gross costs, which include cost of sales, SG&A expense, and non-operating expense, the cost of sales has decreased remarkably since 2002. Group B has also posted a notable decline in the same item since 2002. Fourth, the trend analysis of the ratio of gross cost to SG&A expense showed the following. Due to its unique characteristics, the home shopping industry usually posts a high ratio of SG&A expense.
However, an SG&A expense share of more than 80% reflects lax management and, at the same time, a much lower net income on sales than in other industries. Last but not least, the trend analysis of ROI showed the following. For CJ Home Shopping, the ROI curve resembles that of its investment in fixed assets; as it turned out, the company's ratio of fixed assets to operating income skyrocketed in 2004 and 2005. As for GS Home Shopping, its fixed assets are not as large as CJ Home Shopping's. Consequently, competition in the home shopping industry is currently among CJ, GS, Hyundai, NS, and Woori Home Shopping, and all of them need to manage their costs more thoroughly. For the latecomers of Group B and other home shopping companies to advance further, the current lax management should be reformed, particularly in the SG&A expense sector. Given that total sales in the Internet shopping sector are projected to exceed 20 trillion won by 2010, all participants in the home shopping industry should make efficient management of costs and expenses, rather than revenue growth, their top priority if they hope to keep growing after 2007.
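The ratios analyzed above are simple statement-level quantities. As a hedged illustration (all figures are hypothetical, not drawn from any company's actual DART filing), they can be computed as:

```python
# Financial ratios used in the study, computed from hypothetical
# statement figures (illustrative only, not actual filings).

def net_profit_to_sales(net_profit, sales):
    """Ratio of net profit to sales, in percent."""
    return 100.0 * net_profit / sales

def operating_income_to_sales(operating_income, sales):
    """Ratio of operating income to sales, in percent."""
    return 100.0 * operating_income / sales

def roi(net_profit, invested_capital):
    """Return on investment, in percent."""
    return 100.0 * net_profit / invested_capital

# Hypothetical figures (million KRW)
sales = 1_000_000
net_profit = 55_000
operating_income = 70_000
invested_capital = 400_000

print(net_profit_to_sales(net_profit, sales))              # 5.5
print(operating_income_to_sales(operating_income, sales))  # 7.0
print(roi(net_profit, invested_capital))                   # 13.75
```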


인터넷 커뮤니티에서 사용자 참여가 밀착도와 지속적 이용의도에 미치는 영향 (A Study on the Effects of User Participation on Stickiness and Continued Use on Internet Community)

  • 고미현;권순동
    • Asia pacific journal of information systems
    • /
    • Vol. 18, No. 2
    • /
    • pp.41-72
    • /
    • 2008
  • The purpose of this study is to investigate the effects of user participation, network effect, social influence, and usefulness on stickiness and continued use of Internet communities. In this research, stickiness refers to repeat visits to and visit duration on an Internet community; continued use means the willingness to continue to use an Internet community in the future. Internet community-based companies can earn money by selling digital contents such as games, music, and avatars, by advertising on the site, or by offering affiliate marketing. For such money-making, the stickiness and continued use of Internet users are much more important than the number of Internet users. We tried to answer the following three questions. First, what are the effects of user participation on stickiness and continued use of Internet communities? Second, what forms user participation? Third, are network effect, social influence, and usefulness, which were significant in prior research on the technology acceptance model (TAM), still significant in Internet communities? In this study, user participation, network effect, social influence, and usefulness are independent variables, stickiness is the mediating variable, and continued use is the dependent variable. Among the independent variables, we focused on user participation. User participation means that an Internet user participates in the development of an Internet community site (called a mini-hompy or blog in Korea). User participation was studied from 1970 to 1997 in the information systems research area, but since 1997, when the Internet started to spread to the public, it has hardly been studied. Given the importance of user participation to the success of Internet-based companies, this research topic is very meaningful. To test the proposed model, we used a data set generated from a survey.
The survey instrument was designed on the basis of a comprehensive literature review and interviews of experts, and was refined through several rounds of pretests, revisions, and pilot tests. The respondents were undergraduate and graduate students who mainly used Internet communities. Data analysis was conducted on 217 respondents (response rate: 97.7 percent). We used structural equation modeling (SEM) implemented in partial least squares (PLS). We chose PLS for two reasons. First, our model has formative constructs, and PLS uses a components-based algorithm that can estimate formative constructs. Second, PLS is more appropriate when the research model is at an early stage of development; a review of the literature suggests that empirical tests of user participation are still sparse. The model was tested in the order of the three research questions. First, user participation had direct effects on stickiness (${\beta}$=0.150, p<0.01) and continued use (${\beta}$=0.119, p<0.05), and, as a partial mediation model, an indirect effect on continued use mediated through stickiness (${\beta}$=0.007, p<0.05). Second, optional participation and prosuming participation significantly formed user participation; optional participation, with a path magnitude as high as 0.986 (p<0.001), is a key determinant of the strength of user participation. Third, network effect (${\beta}$=0.236, p<0.001), social influence (${\beta}$=0.135, p<0.05), and usefulness (${\beta}$=0.343, p<0.001) had directly significant impacts on stickiness. Network effect and social influence, as a full mediation model, both had significant indirect impacts on continued use mediated through stickiness (${\beta}$=0.11, p<0.001, and ${\beta}$=0.063, p<0.05, respectively). By contrast, usefulness, as a partial mediation model, had both a direct impact on continued use and an indirect impact mediated through stickiness.
This study makes three contributions. First, this is the first empirical study showing that user participation is a significant driver of continued use. Information systems researchers have hardly studied user participation since the late 1990s, and marketing researchers have only recently begun to study it. Second, this study enhances the understanding of user participation. Until recently, user participation was studied from the bipolar viewpoint of participation vs. non-participation, and even studies of participation were limited to optional participation. This study proved the existence of prosuming participation, in which users design and produce products or services, besides optional participation, and empirically showed that optional participation and prosuming participation are the key determinants of user participation. Third, our study complements traditional TAM studies. According to the prior TAM literature, network effect, social influence, and usefulness affect technology adoption; this study proved that these constructs are still significant in Internet communities.
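The partial-mediation result above rests on a standard path-model identity: the indirect effect of X on Y through a mediator M is the product of the X→M and M→Y paths, and the total effect adds the direct path. A minimal sketch with illustrative coefficients (not the paper's PLS estimates):

```python
# Indirect (mediated) and total effects in a simple path model.
# Coefficients are illustrative, not the study's estimates.

def indirect_effect(path_x_to_m, path_m_to_y):
    """Indirect effect of X on Y mediated through M."""
    return path_x_to_m * path_m_to_y

def total_effect(direct_x_to_y, path_x_to_m, path_m_to_y):
    """Total effect = direct path plus the mediated path."""
    return direct_x_to_y + indirect_effect(path_x_to_m, path_m_to_y)

print(indirect_effect(0.5, 0.4))    # 0.2
print(total_effect(0.1, 0.5, 0.4))  # roughly 0.3
```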

SLA 표면 처리 및 외측 연결형의 국산 임플랜트에 대한 임상적, 방사선학적 평가 (Clinical and radiographic evaluation of $Neoplan^{(R)}$ implant with a sandblasted and acid-etched surface and external connection)

  • 안희석;문홍석;심준성;조규성;이근우
    • 대한치과보철학회지
    • /
    • Vol. 46, No. 2
    • /
    • pp.125-136
    • /
    • 2008
  • Since dental implants based on the concept of osseointegration were introduced by $Br{\aa}nemark$ et al., implant treatment has shown high long-term success rates in dental care. As implant treatment came to be recognized as a primary option for restoring missing teeth, its use expanded rapidly in both scope and frequency. Although the use of Korean-made implants has also increased, few domestic implant systems are supported by long-term clinical and objective data. This study is a retrospective analysis of the clinical and radiographic outcomes, over 18 to 57 months, of a Korean implant system with a sandblasted and acid-etched (SLA) surface and an external connection. The subjects were 96 Neoplant$^{(R)}$ implants (Neobiotech, Seoul, Korea) placed in 25 patients treated at Yonsei University Dental Hospital; the mean age was 63.5 years for the men and 44.3 years for the women. From the patient records, sex, age, edentulous type, implant position, implant diameter and length, second-stage surgery, prosthesis type, opposing dentition, and the type and frequency of clinical complications were examined, along with the corresponding distributions, differences in survival rate, and effects on marginal bone loss. The results were as follows. 1. Of the 96 implants placed in 25 patients, two failed, giving a cumulative survival rate of 97.9%. 2. For the 88 implants available for follow-up, the survival rate was 96.2% in the maxilla and 98.4% in the mandible, and 97.5% in the posterior region versus 100% in the anterior region. 3. Marginal bone loss during the first year after prosthesis placement, and annually thereafter, was greater in men than in women (P<0.05). 4. When natural teeth were present posterior to the implant-supported prosthesis, annual bone loss was smaller, both in the first year and thereafter, than when they were absent (P<0.05). 5. Annual marginal bone loss after the first year was greater at posterior than at anterior sites (P<0.05). 6. No differences in marginal bone loss were found between arches or by prosthesis type, opposing dentition, or second-stage surgery (P>0.05). In summary, the factors affecting marginal bone loss were sex, edentulous type, and position within the arch, while arch, prosthesis type, opposing dentition, and second-stage surgery had no effect. Over follow-up periods of up to 57 months, the clinical success rate of this SLA-surfaced, externally connected Korean implant was satisfactory, and marginal bone loss met the criteria for implant success; nevertheless, longer-term evaluation is needed, and mid- to long-term studies of the various Korean implant systems should continue.
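The cumulative survival figure reported above (96 implants, 2 failures, 97.9%) is a simple proportion; a hedged one-function sketch (a plain proportion, not the study's life-table method):

```python
# Simple survival-rate proportion, rounded to one decimal place.
def survival_rate(placed, failed):
    return round(100.0 * (placed - failed) / placed, 1)

print(survival_rate(96, 2))   # 97.9, matching the reported cumulative rate
```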

대학생의 학창경험이 사회 진출에 미치는 영향: 대학생활 활동 로그분석을 중심으로 (School Experiences and the Next Gate Path : An analysis of Univ. Student activity log)

  • 이은주;박도형
    • 지능정보연구
    • /
    • Vol. 26, No. 4
    • /
    • pp.149-171
    • /
    • 2020
  • The university years are when students must actually choose a career. As our society develops rapidly, occupations become more diverse, segmented, and specialized, and the time students spend preparing for employment keeps growing. This study analyzed, using students' log data, how the various activities students undertake inside and outside school affect employment. For the experiment, student activities were systematically classified and the activity data were grouped into six core competencies (job expertise, leadership and teamwork, globalization, job commitment, job search, and self-directed execution), and the effect of these six competency scores on employment status (employed group vs. unemployed group) was analyzed. The analysis confirmed significant level differences between the employed and unemployed groups in all six competencies, suggesting that school activities are meaningful for employment. Next, to analyze the effect of the six competencies on the quality of employment outcomes, each competency level was split into high and low, six groups were formed based on first-year salary, and the relationships were examined; students with high levels of the globalization, job-search, and self-directed execution competencies also showed higher employment outcomes as measured by salary. The theoretical contributions of this study are as follows. First, it links competencies extracted from school experience to competencies in the human-resource and organizational-management literature, adding job-search and self-directed execution as competencies a university student needs for individual career success. Second, it measured each competency from real activity-log data and validated it against outcome variables. Third, it analyzed not only a quantitative outcome (employment rate) but also a qualitative one (salary level). The practical implications are as follows. First, the results can guide university students in building career development plans: rather than accumulating unfocused, unbalanced, or excessive credentials, students need employment preparation that expresses their strengths based on analysis of the world of work and of specific jobs. Second, those who plan events for university students at schools, companies, local governments, and government agencies can refer to the six competencies proposed here when designing the experiences students need; when events build the competencies their student audience needs while serving each organization's purpose, both demand and supply sides benefit. Third, in the era of digital transformation, government policymakers planning balanced national development can design policies that channel students' curiosity and energy toward achieving both student competency development and balanced national growth. Launching unprecedented platform services and digitalizing existing analog products, services, and corporate culture require much manpower, and the engagement of today's digital-native university students can be a catalyst across all industries as well as a necessary experience for their own successful career development.
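A group comparison like the one described (competency levels of the employed vs. unemployed groups) is typically tested with a two-sample statistic. A minimal sketch using Welch's t with illustrative scores (not the actual activity-log data):

```python
# Welch's t statistic for comparing a competency score between two
# independent groups. Scores below are illustrative placeholders.
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

employed = [3.8, 4.1, 3.9, 4.3, 4.0]       # hypothetical competency scores
not_employed = [3.1, 3.4, 3.0, 3.3, 3.2]
print(round(welch_t(employed, not_employed), 2))
```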

THE CURRENT STATUS OF BIOMEDICAL ENGINEERING IN THE USA

  • Webster, John G.
    • 대한의용생체공학회:학술대회논문집
    • /
    • 대한의용생체공학회 1992 Spring Conference
    • /
    • pp.27-47
    • /
    • 1992
  • Engineers have developed new instruments that aid in diagnosis and therapy. Ultrasonic imaging has provided a nondamaging method of imaging internal organs: a complex transducer emits ultrasonic waves at many angles and reconstructs a map of internal anatomy as well as the velocities of blood in vessels. Fast computed tomography permits reconstruction of the 3-dimensional anatomy and perfusion of the heart at 20-Hz rates. Positron emission tomography uses certain isotopes that produce positrons, which react with electrons to simultaneously emit two gamma rays in opposite directions; it locates the region of origin by using a ring of discrete scintillation detectors, each in electronic coincidence with an opposing detector. In magnetic resonance imaging, the patient is placed in a very strong magnetic field, and the precession of the hydrogen atoms is perturbed by an interrogating field to yield two-dimensional images of soft tissue with exceptional clarity. As an alternative to radiology image processing, film archiving, and retrieval, picture archiving and communication systems (PACS) are being implemented: images from computed radiography, magnetic resonance imaging (MRI), nuclear medicine, and ultrasound are digitized, transmitted, and stored in computers for retrieval at distributed workstations. In electrical impedance tomography, electrodes are placed around the thorax; a 50-kHz current is injected between two electrodes, voltages are measured on all other electrodes, and a computer processes the data to yield an image of the resistivity of a 2-dimensional slice of the thorax. During fetal monitoring, a corkscrew electrode is screwed into the fetal scalp to measure the fetal electrocardiogram; correlations with uterine contractions yield information on the status of the fetus during delivery. To measure cardiac output by thermodilution, cold saline is injected into the right atrium.
A thermistor in the right pulmonary artery yields temperature measurements, from which we can calculate cardiac output. In impedance cardiography, we measure the changes in electrical impedance as the heart ejects blood into the arteries; motion artifacts are large, so signal averaging is useful during monitoring. An intraarterial blood gas monitoring system permits monitoring in real time: light is sent down optical fibers inserted into the radial artery, where it is absorbed by dyes that reemit it at a different wavelength; the emitted light travels up the optical fibers to an external instrument that determines O2, CO2, and pH. Therapeutic devices include the electrosurgical unit: a high-frequency electric arc is drawn between the knife and the tissue; the arc cuts and the heat coagulates, thus preventing blood loss. Hyperthermia has demonstrated antitumor effects in patients in whom all conventional modes of therapy have failed; methods of raising tumor temperature include focused ultrasound, radio-frequency power through needles, or microwaves. When the heart stops pumping, we use the defibrillator to restore normal pumping: a brief, high-current pulse through the heart synchronizes all cardiac fibers to restore normal rhythm. When the cardiac rhythm is too slow, we implant a cardiac pacemaker, whose electrode within the heart stimulates the cardiac muscle to contract at the normal rate. When the cardiac valves are narrowed or leak, we implant an artificial valve; silicone rubber and Teflon are used for biocompatibility. Artificial hearts powered by pneumatic hoses have been implanted in humans; however, the quality of life gradually degrades, and death ensues. When kidney stones develop, lithotripsy is used: a spark creates a pressure wave, which is focused on the stone and fragments it, and the pieces pass out normally. When the kidneys fail, the blood is cleansed by hemodialysis.
Urea passes through a porous membrane into a dialysate bath to lower its concentration in the blood. The blind are able to read by scanning the Optacon with their fingertips: a camera scans letters and converts them to an array of vibrating pins. The deaf are able to hear using a cochlear implant: a microphone detects sound and divides it into frequency bands, and 22 electrodes within the cochlea stimulate the acoustic nerve to provide sound patterns. For those who have lost muscle function in the limbs, researchers are implanting electrodes to stimulate the muscles; sensors in the legs and arms feed signals back to a computer that coordinates the stimulators to provide limb motion. For those with high spinal cord injury, a puff-and-sip switch can control a computer and permit the disabled person to operate the computer and communicate with the outside world.
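The thermodilution measurement described above follows the Stewart-Hamilton relation: cardiac output is proportional to the injectate's heat deficit divided by the area under the blood-temperature change curve. A hedged numerical sketch (the lumped constant k and all values are illustrative):

```python
# Stewart-Hamilton thermodilution sketch: flow is proportional to the
# injected heat deficit divided by the area under the temperature curve.
def cardiac_output_ml_s(volume_ml, t_blood, t_injectate, area_degC_s, k=1.0):
    """Cardiac output in mL/s; k lumps density/specific-heat corrections."""
    return k * volume_ml * (t_blood - t_injectate) / area_degC_s

# 10 mL of 0 degC saline into 37 degC blood, 4.625 degC*s under the curve:
co = cardiac_output_ml_s(10, 37.0, 0.0, 4.625)
print(co * 60 / 1000)   # convert mL/s to L/min
```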


고선량률(HDR) 근접치료의 동위원소 Ir-192에 대한 측정방법에 관한 고찰 (A Study on Calibration Procedures for Ir-192 High Dose Rate Brachytherapy Sources)

  • 백태성;이승욱;나수경
    • 대한방사선치료학회지
    • /
    • Vol. 19, No. 1
    • /
    • pp.19-26
    • /
    • 2007
  • Purpose: To compare the efficiency of various calibration procedures for Ir-192 high-dose-rate (HDR) brachytherapy sources, and to assess the clinical suitability and usefulness of a newly built PMMA (polymethyl methacrylate: $C_5H_8O_2$) plate phantom devised as an alternative that requires no additional measurement equipment. Materials and Methods: Measurements from three systems (well-type chamber, source calibration jig, and PMMA plate phantom) were compared. With the source calibration jig and the PMMA plate phantom, a Farmer-type chamber was used; each measurement was repeated five times and compared with the manufacturer's certified value. In addition, each time a source was exchanged, the manufacturer's value supplied with the new source was compared with our measurement to evaluate the accuracy of the activity. Results: For the Ir-192 source, the root-mean-square (RMS) values of the relative error against the manufacturer's value were 0.6% for the well-type chamber, 1.57% for the source calibration jig, and 2.1% for the PMMA plate phantom. The mean errors and deviations were $-0.2{\pm}0.5%$, $0.97{\pm}1.23%$, and $-0.89{\pm}1.87%$, respectively. Conclusion: Among the three measurement systems tested, the well-type chamber gave the best results. The PMMA plate phantom, newly built as an alternative without purchasing additional equipment, did not exceed the recommended ${\pm}5%$ limit and is therefore considered clinically useful.
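The RMS figures above summarize per-measurement relative errors against the manufacturer's certified activity. A small sketch (the error values are illustrative, not the study's measurements):

```python
# Root-mean-square of a list of relative errors (in percent).
from math import sqrt

def rms(errors_percent):
    return sqrt(sum(e * e for e in errors_percent) / len(errors_percent))

well_chamber_errors = [0.4, -0.7, 0.5, -0.6, 0.7]   # illustrative % errors
print(round(rms(well_chamber_errors), 2))
```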


방열관의 배치와 관내 유속이 온수난방 온실의 온도분포에 미치는 영향 (Effect of Pipes Layout and Flow Velocity on Temperature Distribution in Greenhouses with Hot Water Heating System)

  • 신현호;김영식;남상운
    • 생물환경조절학회지
    • /
    • Vol. 28, No. 4
    • /
    • pp.335-341
    • /
    • 2019
  • To provide basic data for making the temperature distribution of heated greenhouses more uniform, heating experiments were carried out in a tomato greenhouse with a hot-water heating system. The correlation between heating-pipe surface temperature and indoor air temperature was analyzed, and measures to reduce the deviation of pipe surface temperature and improve its uniformity were derived through analysis of the heat-transfer characteristics of the heating pipes and improvement of their layout. Comparing the temperature distributions of two different greenhouses in terms of maximum deviation and uniformity, the greenhouse with the higher hot-water flow rate and shorter heating-pipe runs showed smaller temperature deviation and higher uniformity. Operating circulation fans also reduced the temperature deviation and improved uniformity. Correlation analysis between pipe surface temperature and indoor air temperature showed a significant (p<0.01) positive correlation in both greenhouses. This confirmed that the indoor air temperature distribution in a hot-water-heated greenhouse is governed by the distribution of the heating-pipe surface temperature, so arranging the heating pipes to minimize temperature deviation can improve the uniformity of the indoor temperature distribution. Analysis of the heat-transfer characteristics of the heating pipes showed that the temperature deviation grows as pipe length increases and shrinks as the in-pipe flow velocity increases. Therefore, the temperature distribution and environmental uniformity of a greenhouse can be improved by laying out the heating pipes with shorter branch runs and by controlling the in-pipe flow velocity. For the tube-rail (40A) hot-water heating system most widely used in Korean greenhouses, the analysis showed that keeping the temperature deviation within a single branch pipe below $3^{\circ}C$ requires limiting the heating-pipe length to 40, 80, 120, 160, and 200 m for in-pipe flow velocities of 0.2, 0.4, 0.6, 0.8, and $1.0m{\cdot}s^{-1}$, respectively.
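The reported limits (40 m at 0.2 m/s up to 200 m at 1.0 m/s) follow a simple proportional rule, allowable branch length = 200 × velocity, for keeping the within-branch temperature drop under 3 °C. A sketch of that reported relation (a linear fit to the abstract's numbers, not a heat-transfer model):

```python
# Allowable 40A tube-rail branch length for a <3 degC temperature drop,
# as a linear fit to the limits reported above (not a physics model).
def max_branch_length_m(velocity_m_s):
    return 200.0 * velocity_m_s

for v in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(v, max_branch_length_m(v))   # 40, 80, 120, 160, 200 m
```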

신 흡수제(KoSol-5)를 적용한 0.1 MW급 Test Bed CO2 포집 성능시험 (0.1 MW Test Bed CO2 Capture Studies with New Absorbent (KoSol-5))

  • 이정현;김범주;신수현;곽노상;이동욱;이지현;심재구
    • 공업화학
    • /
    • Vol. 27, No. 4
    • /
    • pp.391-396
    • /
    • 2016
  • A 0.1 MW Test Bed $CO_2$ capture performance test was carried out with KoSol-5, a high-efficiency aqueous amine $CO_2$ absorbent developed by the KEPCO Research Institute. Using flue gas from a 500 MW coal-fired power plant, the performance of a post-combustion $CO_2$ capture process treating two tonnes of $CO_2$ per day was verified; uniquely in Korea, the regeneration energy consumption was also measured experimentally, so as to provide reliable data on the performance of the KoSol-5 absorbent. In addition, the energy savings from operating key process variables and from improving the absorber intercooling efficiency were tested. The $CO_2$ removal rate in the absorber was stably maintained at the performance criterion suggested by the IEA Greenhouse Gas R&D Programme (IEA-GHG) for $CO_2$ capture technology ($CO_2$ removal rate: 90%). The steam consumption (regeneration energy) for regenerating the KoSol-5 absorbent was calculated to be $2.95GJ/tonCO_2$, about 26% lower than the average regeneration energy of the conventional commercial absorbent MEA (monoethanolamine, about $4.0GJ/tonCO_2$). This study confirmed the excellent $CO_2$ capture performance of the KoSol-5 absorbent and of the capture process developed by the KEPCO Research Institute, and applying the proven high-efficiency absorbent (KoSol-5) to a demonstration-scale $CO_2$ capture plant is expected to lower $CO_2$ capture costs significantly.
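The "about 26% lower" figure follows directly from the two regeneration-energy values, (4.0 − 2.95) / 4.0 ≈ 26%. As a one-line check:

```python
# Percentage saving of KoSol-5 regeneration energy (2.95 GJ/tonCO2)
# versus the typical MEA level (about 4.0 GJ/tonCO2).
def saving_percent(new_value, baseline):
    return 100.0 * (baseline - new_value) / baseline

print(saving_percent(2.95, 4.0))   # roughly 26
```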

$^{192}Ir$ source를 이용한 자궁경부암 강내치료시 사용하는 packing의 효과에 대한 고찰 (Packing effects on the intracavitary radiation therapy of the uterine cervix cancer)

  • 조정근;이두현;시창근;최윤경;김태윤
    • 대한방사선치료학회지
    • /
    • Vol. 16, No. 1
    • /
    • pp.73-77
    • /
    • 2004
  • Purpose: In high-dose-rate intracavitary therapy for cervical cancer, the source lies close to the lesion, so treatment can be planned to deliver a minimal dose to the surrounding critical organs. Anatomically, however, the bladder and rectum are adjacent to the lesion, so packing is performed to minimize their dose. To assess the actual effect of packing, films were taken during simulation both with the packing in place and after its removal, and the bladder and rectum point doses reported in ICRU (International Commission on Radiation Units and Measurements) Report 38 were compared. Materials and Methods: To examine the dose change in the bladder and rectum before and after packing, AP and LAT films were taken with the packing in place and again, under identical conditions, after removing the packing. The bladder and rectum reference points to be examined were marked on each film and entered into the treatment planning system (PLATO BPS v13.7), and a plan prescribing $100\%$ of the dose to point A was made as recommended in ICRU Report 38. For the rectum, because the ICRU point does not accurately represent the rectal dose, the maximum point was found and compared. The measured values were analyzed for the significance of the packing effect using the Wilcoxon signed-rank test (SAS statistical analysis package). Results: Without packing, the bladder and rectum doses were $80.1{\pm}16.8(\%)$ and $82.7{\pm}16.3(\%)$, respectively; with packing they were $63.0{\pm}16.3(\%)$ and $67.6{\pm}16.3(\%)$, a mean dose reduction of $17.2{\pm}4.0(\%)$ for the bladder and $15.1{\pm}9.9(\%)$ for the rectum. The Wilcoxon signed-rank test gave a very large test statistic of 33 for both bladder and rectum, with p = 0.001, a highly significant difference: even at the $99\%$ confidence level, the difference in mean bladder and rectum dose with and without packing is clear and quantitatively demonstrates the packing effect. Conclusion: Intracavitary therapy is recognized as a treatment that must accompany external-beam therapy in cervical cancer, because it achieves a high therapeutic effect by delivering a markedly higher dose to the primary lesion while sparing the adjacent bladder and rectum. As noted above, however, these critical organs are anatomically so close that they must be artificially separated, and packing is used for this purpose. This study showed that the packing method used in intracavitary therapy for cervical cancer is highly effective, and every effort should be made, through active packing, to reduce the normal-tissue complications that are the greatest concern in radiotherapy.
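The Wilcoxon signed-rank statistic used above ranks the absolute paired dose differences and sums the ranks of the positive ones. A minimal sketch with illustrative paired doses (not the study's measurements; ties are ignored in this sketch):

```python
# Wilcoxon signed-rank W+ for paired samples: rank |differences|,
# then sum the ranks of the positive differences. Illustrative data only.

def wilcoxon_w_plus(before, after):
    diffs = [b - a for b, a in zip(before, after) if b != a]
    ranked = sorted(diffs, key=abs)          # rank 1 = smallest |diff|
    return sum(i + 1 for i, d in enumerate(ranked) if d > 0)

no_packing = [80.1, 82.7, 78.5, 85.0, 79.2]    # % of prescribed dose
with_packing = [63.0, 67.6, 66.1, 70.3, 64.8]
print(wilcoxon_w_plus(no_packing, with_packing))   # 15: every pair improved
```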


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • Vol. 20, No. 2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal-information overflow is increasing because the data retrieved by the sensors usually contain privacy information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, the previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing: existing studies have focused only on a small subset of its technical characteristics, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limits of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. A survey that assumes its participants have sufficient experience with or understanding of the technologies it presents may therefore not be valid. Moreover, some surveys rest solely on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus much needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider the overall technology characteristics in order to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in the privacy and context-aware systems fields was involved in our research. The Delphi rounds faithfully followed the procedure for a Delphi study proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered an exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to this end, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates, which were drawn from the literature survey; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to that end, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important of the main factors and sub-factors, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent of users' privacy concerns. A traditional user questionnaire was not chosen because users of context-aware personalized services were considered to lack almost all understanding of and experience with the new technology. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensory network as the most important among the technological characteristics of context-aware personalized services.
In creating context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies presuppose the development of context-aware technology and focus on which services and systems should be provided and developed by utilizing context information. The results of this study show, however, that in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Following up on the evaluation of the sub-factors, additional studies will be necessary on approaches to reducing users' privacy concerns about technological characteristics such as a highly identifiable level of identical data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, moving toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
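The concordance analysis mentioned above is commonly operationalized as Kendall's coefficient of concordance, W = 12S / (m²(n³ − n)), where m experts rank n items and S is the sum of squared deviations of the rank totals from their mean. A minimal sketch with an illustrative rank matrix (not the panel's actual data):

```python
# Kendall's W for agreement among m experts ranking n factors.
def kendalls_w(rankings):
    m = len(rankings)                  # number of experts
    n = len(rankings[0])               # number of factors ranked
    totals = [sum(r[j] for r in rankings) for j in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12.0 * s / (m * m * (n ** 3 - n))

# Three illustrative experts ranking four factors (1 = most important)
ranks = [[1, 2, 3, 4],
         [1, 3, 2, 4],
         [2, 1, 3, 4]]
print(round(kendalls_w(ranks), 3))   # 0.778: fairly strong agreement
```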