• Title/Summary/Keyword: 시스템 활용도 (system utilization)


Applicability Analysis of Constructing UDM of Cloud and Cloud Shadow in High-Resolution Imagery Using Deep Learning (딥러닝 기반 구름 및 구름 그림자 탐지를 통한 고해상도 위성영상 UDM 구축 가능성 분석)

  • Nayoung Kim;Yerin Yun;Jaewan Choi;Youkyung Han
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.351-361
    • /
    • 2024
  • Satellite imagery contains various elements such as clouds, cloud shadows, and terrain shadows. Accurately identifying and eliminating these factors, which complicate satellite image analysis, is essential for maintaining the reliability of remote sensing imagery. For this reason, satellites such as Landsat-8, Sentinel-2, and Compact Advanced Satellite 500-1 (CAS500-1) provide Usable Data Masks (UDMs) with their images as part of their Analysis Ready Data (ARD) products. Precise detection of clouds and their shadows is crucial for the accurate construction of these UDMs. Existing cloud and cloud shadow detection methods fall into threshold-based methods and Artificial Intelligence (AI)-based methods. Recently, AI-based methods, particularly deep learning networks, have been preferred because of their advantage in handling large datasets. This study analyzes the applicability of constructing UDMs for high-resolution satellite images through deep learning-based cloud and cloud shadow detection using open-source datasets. To validate the performance of the deep learning network, we compared the detection results generated by the network with pre-existing UDMs from Landsat-8, Sentinel-2, and CAS500-1 satellite images. The results demonstrated high accuracy in the detection outcomes produced by the network. Additionally, we applied the network to detect clouds and cloud shadows in KOMPSAT-3/3A images, which do not provide UDMs. The experiment confirmed that the deep learning network effectively detected clouds and cloud shadows in high-resolution satellite images. This demonstrates that UDM data for high-resolution satellite imagery can be constructed using a deep learning network.
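
The abstract describes applying a trained segmentation network to scenes and packaging the per-pixel predictions as a UDM. A minimal sketch of that packaging step follows; the class indices, bit-flag layout, and tile size are illustrative assumptions, not details taken from the paper.

```python
# Sketch: turning per-pixel class predictions from a segmentation network
# into a UDM-style mask. The class indices (0=clear, 1=cloud, 2=cloud shadow)
# and the bit-flag encoding are illustrative assumptions, not the paper's spec.
import numpy as np

def predictions_to_udm(class_map: np.ndarray) -> np.ndarray:
    """Encode a per-pixel class map as a bit-flag usable-data mask."""
    udm = np.zeros(class_map.shape, dtype=np.uint8)
    udm[class_map == 1] |= 0b01   # bit 0: cloud
    udm[class_map == 2] |= 0b10   # bit 1: cloud shadow
    return udm

def tile_image(img: np.ndarray, size: int = 512):
    """Yield (row, col, tile) patches so large scenes fit in GPU memory."""
    h, w = img.shape[:2]
    for r in range(0, h, size):
        for c in range(0, w, size):
            yield r, c, img[r:r + size, c:c + size]

# Usage with a dummy "prediction" standing in for network output:
pred = np.random.randint(0, 3, size=(1024, 1024))
udm = predictions_to_udm(pred)
print(np.unique(udm))  # -> flag values present in the mask
```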

The Advancement of Underwriting Skill by Selective Risk Acceptance (보험Risk 세분화를 통한 언더라이팅 기법 선진화 방안)

  • Lee, Chan-Hee
    • The Journal of the Korean life insurance medical association
    • /
    • v.24
    • /
    • pp.49-78
    • /
    • 2005
  • Ⅰ. Background and Purpose: o With a household participation rate of 86%, the Korean insurance market has entered maturity, and distribution is moving from the traditional captive channel to multi-channels such as bancassurance, online-only insurers, and growing telemarketing operations. o With the successive launch of advanced health products such as LTC (long-term care), CI (critical illness), and indemnity medical insurance, underwriting urgently needs to prepare from the standpoint of insurance risk management. o As products and marketing, areas closely tied to underwriting, change, advanced underwriting techniques that properly classify and evaluate risk are urgently required. o Ultimately, this study seeks measures that satisfy customers' diverse coverage needs and maximize insurers' overall profit by strengthening the competitiveness of products, marketing, and underwriting.
Ⅱ. Risk Segmentation in Advanced Insurance Markets: 1. Premium differentiation by environmental risk. (1) Hazardous occupations: o In most advanced markets, including the US and Europe, premiums are differentiated by the insured's occupational risk at issue. o Occupational classification and loading methods vary by benefit, with separate approaches for ordinary death, accidental death, waiver of premium, and disability income (DI). o Loadings are calculated either as a multiple of the standard risk rate or as a flat extra premium per unit of sum insured; for miners, 300% of the standard rate applies to accidental death cover, and a flat extra of $2.95 per $1,000 applies to ordinary death cover. (2) Hazardous hobbies: o Because hobby-related accidents occur persistently, hobbies are treated as risk factors with differentiated premiums. o The extra premium is a fixed rate per unit of sum insured (independent of the amount); for some new leisure sports lacking statistics, the underwriter sets the loading, e.g., paragliding 26-50 times a year carries a flat extra of $2 per $1,000 for accidental death and $8 for DI. o Separately from loadings, exclusions are applied: claims arising from the hazardous hobby are excluded from all benefits, including death. (3) Residence in or travel to hazardous regions: o Temporary or permanent residence in particular countries is assessed considering climate, local sanitation and medical standards, travel risk, and war and riot risk. o Loadings or declinations are applied by benefit (ordinary death, accidental death, etc.), with the same extra premium applying over the whole policy term; for Russia, ordinary death carries a flat extra of $2 per $1,000 and accidental death is declined. (4) Other risks: o Aviation risk is classified into three types (commercial transport, private flying, military flying), with loadings based on the application, supplementary questionnaires, medical reports, and flight history; crop-dusting pilots carry a flat extra of $6 per $1,000 for ordinary death, with accidental death declined. o In the US and Japan, traffic accident and violation records are used to discount premiums for accident-free drivers (as a preferred-risk factor).
2. Premium differentiation by physical risk. (1) Substandard loadings: 1) Risks are accepted up to a total risk index of 500 (excess risk index 400), in 13 bands of 25 points up to 300 and 50 points above 300. 2) The lien (benefit reduction) and loading methods are applied together; since the loading falls by the amount of the benefit reduction, applicants are given a choice, which is useful for high-risk insureds. 3) Temporary loadings apply to histories of specific cancers: depending on the disease profile, an extra premium is charged for 1-5 years after issue, reverting to the standard premium once the loading period has elapsed. 4) A return-of-extra-premium option refunds the extra premiums if the policy remains in force and the insured survives a specified period. (2) Enhanced annuities for substandard lives: o In the UK, enhanced annuities with increased benefits are developed and sold to substandard lives, with benefits increased relative to standard lives according to smoking, occupation, medical history, and other physical and environmental risk factors. (3) Preferred-risk segmentation: o In the US market, standard lives are classified into as many as 8 classes based on 8-14 medical and non-medical risk factors (medical history, blood pressure, family history, smoking, BMI, cholesterol, driving, hazardous hobbies, residence, flight history, alcohol/drugs, etc.), with differentiated discounts. o Discounts vary by company, class, and criteria (up to 75%); entry ages run from a minimum of 16-20 to a maximum of 65-75, with a minimum sum insured of $100,000 (the lowest amount requiring an HIV test). o In Japan, lives are classified into 3-4 classes on 3-4 risk factors for preferred discounts; in Europe, only some markets such as the UK apply non-smoker or preferred discounts.
Ⅲ. Status and Problems of the Korean Market: 1. Issue limits by environmental risk. (1) Hazardous occupations: o Each insurer sets issue limits by risk class based on the industry-wide standard occupational classification, which creates inequity with non-hazardous occupations, limits coverage for high-risk occupations, and destabilizes profit structures; miners, rated risk class 1, are limited to a maximum death benefit of KRW 100 million and a hospitalization benefit of KRW 20,000 per day. o The Financial Supervisory Service approved risk indices by class as reference rates in July 2002, but they were set at about 70% for non-hazardous and 200% for hazardous occupations, making practical application difficult. (2) Hazardous hobbies: o Issue limits are set by applying the occupational risk class of the corresponding occupation, without collecting detailed information (licenses, club membership, etc.) through supplementary questionnaires; paragliding is rated risk class 2, with death cover limited to KRW 200 million. (3) Residence and overseas travel: o Insurers restrict issuance in accident-prone regions (accident insurance unavailable in parts of Gangwon and Chungcheong; hospitalization benefits limited to KRW 20,000 per day in parts of Jeonbuk and Taebaek). o Overseas stays, including travel, are subject to set issue requirements, with limits or declination of accident-concentrated products; for Russia, short stays are rated risk class 1 with accident insurance unavailable, and long stays are declined.
2. Differentiated acceptance by physical risk. (1) Substandard acceptance: o Excess risk indices for increasing or constant risks are converted into benefit reductions on death cover (up to 5 years), leaving serious risk exposure after year 5. o Premium loadings are used by only some companies, mainly on base policies, up to a total risk index of 300 (8 bands); riders cannot be attached to loaded base policies, and applicants with a cancer history are mostly declined. o Exclusions are applied to 39 body parts and 5 diseases (on living benefits such as hospitalization and surgery). (2) Non-smoker/preferred discounts: o Since their introduction in 1999, a single class based on 3-4 risk factors has been operated; insurer S runs two classes (non-smoking preferred and non-smoking standard). o Discounts vary by company and product, up to 22% of the gross premium; smoking status is verified with a urine-stick cotinine test. o Preferred business accounts for 2-15% of new contracts, depending on company policy.
Ⅳ. Measures for Advancing Underwriting: 1. Differentiate premiums by occupational risk: reorganize risk indices into three classes in step with the unification of life and non-life occupational classifications, applying differentiated rates against the non-hazardous baseline. 2. Apply exclusions to hazardous hobbies: introduce exclusions for claims (including death) caused by the hobby. 3. Broaden substandard acceptance: expand premium loadings to hedge risk, raising the total risk index from 300 to 500 and minimizing declinations. 4. Combine loadings with benefit reduction: develop loading schemes with reduction periods, giving customers a choice. 5. Apply temporary loadings: accept specific cancers (stomach, thyroid, etc.) with level extra premiums during the high-risk early policy years. 6. Extend loadings to riders, combined with exclusions: apply loadings to death-related riders such as term riders, and exclusions to living-benefit riders. 7. Refine standard-life segmentation: subdivide classes by expanding risk-evaluation factors such as cholesterol and HDL.
Ⅴ. Expected Effects: 1. Open access to insurance for high-risk occupations, hazardous hobbyists, and substandard lives. 2. Improve equity among policyholders and meet diverse coverage needs. 3. Increase premium income and improve mortality margins through expanded sales and risk hedging. 4. Strengthen insurers' fundamentals ahead of full-scale price competition. 5. Enhance corporate image, reduce resistance to medical examinations, and prevent portfolio deterioration.
Ⅵ. Conclusion: o Insurers should move away from passive, uniform acceptance and adopt proper risk-assessment tools that evaluate insureds from multiple angles and offer appropriate premiums and rational terms; o advancing acceptance techniques should go hand in hand with professionalizing underwriting staff and building information and system infrastructure; o this will contribute not only to managing insurers' mortality profit and loss but also to strengthening the competitiveness of Korean life underwriting and globalizing underwriters in preparation for market opening and a rapidly changing insurance environment.
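
To make the loading arithmetic in the case studies above concrete, here is a minimal sketch of the two loading styles described: a multiple of the standard risk rate, and a flat extra per $1,000 of sum insured. Only the 300% multiple and the $2.95 flat extra come from the abstract's miner example; the standard rate figure is made up.

```python
# Sketch of the two substandard loading styles described above:
# (a) a multiple of the standard risk rate, (b) a flat extra premium per
# $1,000 of sum insured. The 300% and $2.95 figures reuse the miner
# example; the 0.002 standard rate is an assumed placeholder.

def loaded_rate(standard_rate: float, multiple_pct: float) -> float:
    """(a) Rate-multiple loading, e.g. 300% of standard for accidental death."""
    return standard_rate * multiple_pct / 100.0

def flat_extra(sum_insured: float, extra_per_1000: float) -> float:
    """(b) Flat extra premium, e.g. $2.95 per $1,000 for ordinary death."""
    return sum_insured / 1000.0 * extra_per_1000

print(loaded_rate(0.002, 300))       # accidental-death rate at 300% of standard
print(flat_extra(100_000, 2.95))     # $295.00 annual flat extra on $100,000
```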


Nitrogen Removal Rate of A Subsurface Flow Treatment Wetland System Constructed on Floodplain During Its Initial Operating Stage (하천고수부지 수질정화 여과습지의 초기운영단계 질소제거)

  • Yang, Hong-Mo
    • Korean Journal of Environmental Agriculture
    • /
    • v.22 no.4
    • /
    • pp.278-283
    • /
    • 2003
  • This study examined the nitrogen removal rate of a subsurface-flow treatment wetland system constructed on the floodplain of the Kwangju River from May to June 2001. Its dimensions were 29 m in length, 9 m in width, and 0.65 m in depth. A bottom layer 45 cm deep was filled with crushed granite about 15-30 mm in diameter, a middle layer 10 cm deep held pea pebbles about 10 mm in diameter, and an upper layer 5 cm deep contained coarse sand. Reeds (Phragmites australis) were transplanted on the surface of the system; they were dug out of natural wetlands, and their stems were cut about 40 cm above the bottom ends. Water from the Kwangju River flowed into the system via a pipe by gravity, and the effluent was funneled back into the river. The height of reed stems was 44.2 cm in July 2001 and 75.3 cm in September 2001, and the number of stems increased from 80 stems/m² in July 2001 to 136 stems/m² in September 2001. Volume and water quality of inflow and outflow were analyzed from July 2001 through December 2001. Inflow and outflow averaged 40.0 and 39.2 m³/day, respectively, and the hydraulic detention time was about 1.5 days. Average nitrogen uptake by reeds was 69.31 mg N/m²/day. Removal rates of NO₃-N, NH₃-N, and T-N averaged 195.58, 53.65, and 628.44 mg/m²/day, respectively. Changes in the NO₃-N and NH₃-N abatement rates were closely related to changes in wetland temperature. The lower removal rate of nitrogen species compared with subsurface-flow wetlands operating in North America can be attributed to the system being in its initial stage and to the inclusion of two cold months in the six-month monitoring period. An increase in the standing density of reeds within a few years will develop both root zones suitable for the nitrification of ammonia and surface-layer substrates beneficial to the denitrification of nitrates into nitrogen gas, which may raise the nitrogen retention rate.
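
The reported bed geometry and flows allow a quick consistency check of the ~1.5-day hydraulic detention time. The sketch below performs it; the effective porosity of the crushed-granite media (~0.35) is an assumed value, not a figure from the abstract.

```python
# Worked check of the reported ~1.5-day hydraulic detention time.
# Bed geometry and flow are from the abstract; the effective porosity
# of the gravel/pebble media (~0.35) is an assumed value, not reported.
length, width, depth = 29.0, 9.0, 0.65       # m
porosity = 0.35                               # assumed for crushed granite
inflow = 40.0                                 # m^3/day (reported average)

pore_volume = length * width * depth * porosity   # ~59.4 m^3
hrt_days = pore_volume / inflow
print(f"HRT = {hrt_days:.2f} days")               # ~1.48, matching ~1.5 d
```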

Conservation Status, Construction Type and Stability Considerations for Fortress Wall in Hongjuupseong (Town Wall) of Hongseong, Korea (홍성 홍주읍성 성벽의 보존상태 및 축성유형과 안정성 고찰)

  • Park, Junhyoung;Lee, Chanhee
    • Korean Journal of Heritage: History & Science
    • /
    • v.51 no.3
    • /
    • pp.4-31
    • /
    • 2018
  • It is difficult to ascertain exactly when the Hongjuupseong (Town Wall) was first constructed, as it has undergone several rounds of repair and maintenance since being newly built in 1415, the first year of the reign of King Munjong (the 5th king of the Joseon Dynasty). Parts of its walls were demolished during the Japanese occupation, leaving the wall as it is today. The Hongseong region is also geologically susceptible to earthquakes: recorded events in 1978 and 1979, with magnitudes of 5.0 and 4.0 respectively, left parts of the walls collapsed, and in 2010 heavy rainfall destroyed another part of the wall. The fortress walls of the Hongjuupseong comprise various rocks, types of facing, building methods, and filling materials, depending on the section. Moreover, remaining wall parts were reused in repair works, so the characteristics of each period are reflected vertically in the wall. Based on this vertical distribution, the Hongjuupseong was divided by building type into types I, II, and III. The walls consist mainly of coarse-grained granites, but clearly different rock types were used for the different wall types: the bottom of the wall shows a mixed variety of rocks and of natural and split stones, whereas the center is made up mostly of coarse-grained granites. For repairs, pink feldspar granite was used, a variety different from the rock utilized for Suguji and the Joyangmun Gate. Deterioration of the wall can be categorized into bulging, protrusion of stones, missing stones at the basement, separation of framework, fissure and fragmentation, basement instability, and structural deformation. Manual and light-wave measurements were used to check the magnitude and direction of movement of the fortress walls. Manual measurement revealed the sections undergoing structural deformation, and comparison with the light-wave measurements showed that the two monitoring methods are correlated; they can therefore be used complementarily for the long-term conservation and management of the wall. Additionally, the measurement system must be maintained, managed, and improved for the stability of the Hongjuupseong. Measurements at Nammunji indicated continuing changes in behavior due to collapse and rainfall; it is highly plausible that changes accumulated over a long period reached a threshold under concentrated rainfall and subsequent behavioral irregularities, leading to the wall's collapse. Based on these findings, six management grades, from 0 to 5, are suggested for managing the Hongjuupseong more effectively. Under the suggested system, 501.9 m (61.10%) of the wall was assessed as grade 1, 29.5 m (3.77%) as grade 2, 10.4 m (1.33%) as grade 3, and 241.2 m (30.80%) as grade 4. The grade 4 sections are concentrated around the west of Honghwamun Gate and the east of the battlement, and must be monitored regularly in preparation for a potential emergency. The six-stage management grade system is cyclical: after repair and maintenance works following a comprehensive stability review, a section returns to grade 0. Thorough monitoring and regular grade evaluation are necessary.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic-value analysis or technical auxiliary indices. However, pattern analysis is a difficult technique, and it has been computerized far less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but these can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they locate a point that matches the pattern and measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, it can diverge considerably from reality. Existing research tries to find patterns with price-prediction power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some of these patterns were reported to have price predictability, no performance in the actual market had been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. This is close to a real situation, because performance is measured assuming that both the buy and the sell were executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley (see the code sketch below). The swing wave method was superior to the other methods in the tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases in this simulation was far too large for an exhaustive search for high-success-rate patterns, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so we could respond appropriately to market changes. We optimized at the level of the stock portfolio, because optimizing variables for each individual stock carries a risk of over-optimization; we selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-fitting. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This suggests that patterns need some price volatility in order to take shape, but that more volatility is not always better.
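
A minimal sketch of the swing wave turning-point rule described in the abstract: a bar is a peak when its high exceeds the n highs on each side, and a valley when its low is below the n lows on each side. The value of n, the synthetic price series, and the high/low construction are assumptions for illustration.

```python
# Sketch of the "swing wave" turning-point rule: a bar is a peak if its high
# exceeds the n highs on both sides, a valley if its low is below the n lows
# on both sides. n=3 and the synthetic series are illustrative assumptions.
import numpy as np

def swing_points(high: np.ndarray, low: np.ndarray, n: int = 3):
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        side_highs = np.concatenate([high[i - n:i], high[i + 1:i + n + 1]])
        side_lows = np.concatenate([low[i - n:i], low[i + 1:i + n + 1]])
        if high[i] > side_highs.max():
            peaks.append(i)
        if low[i] < side_lows.min():
            valleys.append(i)
    return peaks, valleys

rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(0, 1, 200)) + 100   # synthetic price path
peaks, valleys = swing_points(close + 0.5, close - 0.5, n=3)
# Five consecutive turning points would then define one M- or W-type pattern.
print(len(peaks), len(valleys))
```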

Analysis of the Range Verification of Proton using PET-CT (Off-line PET-CT를 이용한 양성자치료에서의 Range 검증)

  • Jang, Joon Young;Hong, Gun Chul;Park, Sey Joon;Park, Yong Chul;Choi, Byung Ki
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.29 no.2
    • /
    • pp.101-108
    • /
    • 2017
  • Purpose: The proton used in proton therapy delivers a small dose to the normal tissue in front of the tumor, deposits its maximum dose at the Bragg peak formed at the tumor site, and stops immediately beyond it, so it is very important to verify the proton arrival position. In this study, we used the off-line PET-CT method to measure the distribution of positrons emitted from nuclides such as 11C (half-life = 20 min), 15O (half-life = 2 min), and 13N, and verified the range and distal falloff point of the proton beam from these measurements. Materials and Methods: Spheres of 37 mm, 28 mm, and 22 mm were inserted into an IEC 2001 body phantom, which was filled with water, and a CT image was obtained for each sphere size. To verify the proton range and distal falloff points, SOBPs were set in the treatment planning system at 46 mm for the 37 mm sphere, 37 mm for the 28 mm sphere, and 33 mm for the 22 mm sphere. Protons were delivered to the same isocenter with a single beam at a gantry angle of 0° using the scanning method, and the phantom was then scanned with PET-CT equipment. For PET-CT image acquisition, 50 images were acquired per minute, four ROIs including the spheres in the phantom were set, and 10 images were reconstructed. The activity profile with depth was compared to the dose profile for each sphere size established in the treatment plan. Results: As with the dose profile, the PET-CT activity profile decreased rapidly at the distal falloff position for the 37 mm, 28 mm, and 22 mm spheres. In the SOBP section used to evaluate the range, however, the proximal part of the activity profile differed from the dose profile. Comparing the distal falloff positions between the proton therapy plan and PET-CT, the differences were at most 1.4 mm at the 50% point of the maximum dose for the 37 mm sphere, 1.1 mm at the 45% point for the 28 mm sphere, and 1.2 mm at the maximum point for the 22 mm sphere, all less than 1.5 mm. Conclusion: To maximize the advantages of proton therapy, it is very important to verify the range of the proton beam. In this study, the proton range was confirmed through the SOBP and the distal falloff position of the proton beam using PET-CT. The maximum difference in distal falloff position between the activity distribution measured by PET-CT and the proton therapy plan was 1.4 mm. This may serve as a reference for the dose margin applied in proton therapy planning.
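
A minimal sketch of how a distal falloff position can be read off a depth profile, as done when comparing the PET activity and planned dose profiles: find the deepest crossing of a chosen fraction (e.g. 50%) of the profile maximum and interpolate. The synthetic profile and the interpolation scheme are assumptions, not the authors' procedure.

```python
# Sketch: locate the distal falloff depth as the deepest crossing of a chosen
# fraction (e.g. 50%) of the profile maximum, with linear interpolation
# between samples. The sigmoid profile below is synthetic.
import numpy as np

def distal_falloff_depth(depth, profile, fraction=0.5):
    level = fraction * profile.max()
    i = int(np.max(np.where(profile >= level)[0]))   # last sample at/above level
    if i == len(profile) - 1:
        return float(depth[-1])
    frac = (profile[i] - level) / (profile[i] - profile[i + 1])
    return float(depth[i] + frac * (depth[i + 1] - depth[i]))

depth = np.linspace(0, 60, 121)                        # mm
profile = 1.0 / (1.0 + np.exp((depth - 46) / 1.5))     # falloff centred at 46 mm
print(f"50% falloff at {distal_falloff_depth(depth, profile):.1f} mm")  # ~46.0
```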


A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.191-204
    • /
    • 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio of software (SW) developers in South Korea was 25% in the 2012 fiscal year, and for highly qualified SW developers it reached almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find beginning-level SW developers, but it is still hard for many IT companies to recruit highly qualified ones, because becoming an SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. This study therefore surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014 targeting 130 SW developers working in IT industries in South Korea, gathering demographic information and characteristics of the respondents, the work environments of the SW industry, and the social position of SW developers. Afterward, a regression analysis and a decision tree method were performed on the data; these two widely used data mining techniques have explanatory power and are mutually complementary. We first performed a linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' up to which one can work as an SW developer was the most significant factor. We suppose the major cause of this phenomenon is the structural problem of IT industries in South Korea, which requires SW developers to move from development to management as they are promoted. The 'motivation' to become an SW developer and the 'personality (introverted tendency)' of an SW developer were also highly important factors. Next, the decision tree method, using the well-known C4.5 algorithm, was performed to extract the characteristics of developers with high and low continuity intentions. The results showed that 'motivation', 'personality', and 'expected age' were again important factors, similar to the regression results. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity: a person with a high ability to learn new technology tends to work as an SW developer for a longer period of time. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors, while the 'type of employment (regular/non-regular position)' and the 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method.
In this research, we examined the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts, and can also help in building SW developer fostering policies and in solving the problem of unfilled SW developer recruitment in South Korea.
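
A minimal sketch of the decision-tree step on survey-style features; scikit-learn's CART implementation stands in for the C4.5 algorithm named in the abstract, and the feature names and toy responses are illustrative assumptions.

```python
# Sketch: extracting decision rules that separate developers with high vs. low
# job-continuity intention. scikit-learn's CART is a stand-in for C4.5; the
# feature names and the tiny toy sample are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["expected_age", "motivation", "introversion", "learning_ability"]
X = [
    [55, 5, 2, 5],
    [40, 2, 4, 2],
    [60, 4, 1, 4],
    [35, 1, 5, 1],
    [50, 5, 3, 5],
    [38, 2, 4, 2],
]
y = [1, 0, 1, 0, 1, 0]   # 1 = intends to continue working as a developer

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))   # human-readable rules
```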

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.117-127
    • /
    • 2012
  • Due to the recent expansion of Web 2.0-based services and the widespread adoption of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand human relationships. In social network services, each relation between users is represented by a graph consisting of nodes and links. As the number of users of online social network services grows rapidly, SNS data are actively utilized in enterprise marketing, the analysis of social phenomena, and so on. Social network analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, where nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy and intensity of connection, and classify groups. Ever since social networking services (SNS) drew the attention of millions of users, numerous studies have analyzed their user relationships and messages. Typical representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. Degree centrality analysis does not consider the shortest path between nodes, but the shortest path is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a serious concern because the social networks studied were small. Unfortunately, most SNA methods require significant time to process relevant data, which makes it difficult to apply them to ever-growing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; with 10,000 nodes this is 49,995,000 links, which makes the analysis prohibitively expensive. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest-path-finding method, we show how efficient our proposed approach can be by conducting betweenness centrality and closeness centrality analyses, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds best-first search and a preprocessing step to reduce computation time and rapidly search shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Since a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with multiple connections functions as a hub. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between pairs of distinct nodes.
With this heuristic evaluation function, the worst case occurs when the target node is situated at the bottom of a skewed tree, so a preprocessing step is conducted to handle such target nodes. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network, then compared our method with previous methods based on best-first search and breadth-first search in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, whereas the breadth-first-search-based method takes 1,781 seconds, making ours 7.4 times faster. Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The proposed method shows the possibility of analyzing a large social network with better time performance; as a result, it would improve the efficiency of social network analysis, making it particularly useful in studying social trends and phenomena.
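
A minimal sketch of a degree-heuristic best-first search in the spirit described above: the frontier is expanded hub-first, on the assumption that high-degree nodes reach a target faster. The toy graph is an assumption, and this greedy version returns a path quickly rather than a guaranteed shortest one.

```python
# Sketch of degree-heuristic best-first search: when expanding the frontier,
# prefer high-degree (hub) nodes, since hubs are more likely to reach the
# target quickly. Greedy: finds *a* path fast, not a guaranteed shortest one.
import heapq

def best_first_path(graph: dict, start, goal):
    # Priority = -degree, so the highest-degree frontier node pops first.
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier, (-len(graph[nb]), nb, path + [nb]))
    return None

graph = {"a": ["b", "c"], "b": ["a", "d", "e", "f"], "c": ["a"],
         "d": ["b"], "e": ["b", "f"], "f": ["b", "e"]}
print(best_first_path(graph, "a", "f"))   # hub "b" is explored first
```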

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.101-124
    • /
    • 2018
  • Recently, most technologies have developed in various forms, either through the advancement of a single technology or through interaction with other technologies. In particular, such technologies show convergence arising from the interaction between two or more techniques. In addition, efforts to respond to technological change in advance, by forecasting promising convergence technologies that will emerge in the near future, are continuously increasing, and many researchers are attempting various analyses for this purpose. A convergence technology carries the characteristics of several technologies, according to its principle of generation; therefore, forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been made in attempts to forecast promising technologies using big data analysis and social network analysis. Data-driven studies of convergence technology are actively conducted on discovering new convergence technologies and analyzing their trends, so information about new convergence technologies is more abundant than in the past. However, existing methods for analyzing convergence technology have some limitations. First, most studies analyze data through predefined technology classifications. Technologies appearing recently tend to be convergent and thus consist of technologies from various fields, so a new convergence technology may not belong to any defined class; the existing approach therefore does not properly reflect the dynamic change of the convergence phenomenon. Second, to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators, which do not fully exploit the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; depending on changes in those technologies, it may grow into an independent field or disappear rapidly. In existing analyses, the growth potential of a convergence technology is judged through traditional general-purpose indicators that do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies in turn affect the creation of other technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. Because of the complexity of the field, research on forecasting promising convergence technologies has been relatively scarce, so it is difficult to find a method for evaluating the accuracy of such models. To activate this field, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study.
To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling, we derive a new technology classification in terms of text content, reflecting the dynamic change of the actual technology market rather than the existing fixed classification standard. Next, we identify influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. We then devise a centrality indicator, PGC (potential growth centrality), to forecast the future growth of a technology from its centrality information; it reflects the convergence characteristics of each technology according to technology maturity and the interdependence between technologies. Along with this, we propose a method for evaluating the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. We conduct experiments with 13,477 patent documents dealing with technical content to evaluate the performance and practical applicability of the proposed method. As a result, we confirm that the forecasting model based on the proposed centrality indicator achieves forecast accuracy up to about 2.88 times higher than models based on currently used network indicators.
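
A minimal sketch of the network-construction idea: accumulate an influence edge between two topics whenever both appear in a document, weighted by their topic weights. The abstract does not give the PGC formula, so a simple maturity-weighted degree score stands in as an illustration; the topic weights and maturity values are made up.

```python
# Sketch: build a technology network from per-document topic weights, then
# score each technology by links to mature partners. The PGC formula is not
# given in the abstract; this maturity-weighted degree is only a stand-in.
from itertools import combinations
from collections import defaultdict

doc_topic_weights = [                      # assumed toy topic-model output
    {"AI": 0.6, "sensors": 0.3, "optics": 0.1},
    {"AI": 0.5, "optics": 0.4, "batteries": 0.1},
    {"sensors": 0.7, "batteries": 0.3},
]

edges = defaultdict(float)
for doc in doc_topic_weights:
    for a, b in combinations(sorted(doc), 2):
        edges[(a, b)] += doc[a] * doc[b]   # co-occurrence edge weight

maturity = {"AI": 0.9, "sensors": 0.7, "optics": 0.5, "batteries": 0.4}  # assumed
growth = defaultdict(float)
for (a, b), w in edges.items():
    # Each technology gains growth potential from links to mature partners.
    growth[a] += w * maturity[b]
    growth[b] += w * maturity[a]

print(sorted(growth.items(), key=lambda kv: -kv[1]))
```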

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly, so automatically summarizing key events from massive amounts of news data will help users survey many events at a glance. In addition, building and providing an event network based on the relevance of events can greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, through preprocessing with NPMI and Word2Vec, integrated synonyms while leaving only meaningful words. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, find peaks in the distribution, and detect events. A total of 32 topics were extracted, and the occurrence time of each event was deduced from the point at which its topic distribution surged. As a result, a total of 85 events were detected, and the final 16 events were selected after filtering with a Gaussian smoothing technique. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we calculated the relevance between events and connected them. Finally, we set up the event network by taking each event as a vertex and the relevance score between events as the edge connecting the vertices. The event network constructed in this way helped us sort the major events in the political and social fields in Korea over the last year in chronological order and, at the same time, identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify the relevance of events that were difficult to detect with existing methods. In text preprocessing, we applied various text mining techniques together with Word2Vec to improve the extraction accuracy of proper nouns and compound nouns, which have been difficult to handle in analyzing Korean texts. The event detection and network construction techniques of this study have the following advantages in practical application. First, LDA topic modeling, which is unsupervised learning, can easily extract topics, topic words, and their distributions from a huge amount of data, and, using the date information of the collected news articles, the distribution by topic can be expressed as a time series. Second, by calculating relevance scores from the co-occurrence of topics, which is difficult to grasp in existing event detection, and constructing the event network, the connections between events can be presented in a summarized form. The fact that the relevance-based event network proposed in this study was actually constructed in order of occurrence time supports this, and the network also makes it possible to identify which event served as the starting point of a series of events.
A limitation of this study is that LDA topic modeling yields different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be given by the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study or between events belonging to the same topic.
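
A minimal sketch of the peak-based event detection described above: smooth a topic's daily weight series with a Gaussian filter and flag dates where the smoothed series surges past a threshold. The sigma, threshold rule, and synthetic series are assumptions, not the paper's parameters.

```python
# Sketch: detect an "event" as a surge in a topic's daily weight series,
# after Gaussian smoothing. Sigma, the mean+2*std threshold, and the
# synthetic series are assumptions, not the paper's parameters.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
daily_topic_weight = rng.random(120) * 0.02
daily_topic_weight[40:44] += 0.15          # synthetic burst = one "event"

smooth = gaussian_filter1d(daily_topic_weight, sigma=2.0)
threshold = smooth.mean() + 2 * smooth.std()
peaks = [t for t in range(1, len(smooth) - 1)
         if smooth[t] > threshold
         and smooth[t] >= smooth[t - 1] and smooth[t] >= smooth[t + 1]]
print(peaks)    # date indices where the topic surges, e.g. around day 41
```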