• Title/Summary/Keyword: Methods of Data Representation


A User Profile-based Filtering Method for Information Search in Smart TV Environment (스마트 TV 환경에서 정보 검색을 위한 사용자 프로파일 기반 필터링 방법)

  • Sean, Visal;Oh, Kyeong-Jin;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.97-117 / 2012
  • Nowadays, Internet users tend to perform a variety of actions at the same time, such as web browsing, social networking, and multimedia consumption. While watching a video, once a user becomes interested in a product, the user has to search for information to learn more about it. With the conventional approach, the user has to search for it separately with search engines such as Bing or Google, which can be inconvenient and time-consuming. For this reason, a video annotation platform has been developed to provide users with more convenient and more interactive ways of engaging with video content. In a future smart TV environment, users will be able to follow annotated information, for example a link to a vendor selling the product of interest. It is even better to enable users to search for information by discussing it directly with friends. Users can effectively obtain useful and relevant information about the product from friends who share common interests or may have experienced it before, which is more reliable than the results from search engines. Social networking services provide an appropriate environment for people to share products, show new things to their friends, and share their personal experiences with any specific product. Meanwhile, they can also absorb the most relevant information about a product of interest through comments or discussion among friends. However, within a very large friend graph, determining the most appropriate persons to ask about a specific product remains a limitation of the existing conventional approach. Once users want to share or discuss a product, they simply share it with all friends as a new feed. This means a newly posted article is blindly spread to all friends without considering their background interests or knowledge. As a result, the number of responses will be huge, and users cannot easily pick out the relevant and useful ones, since the friends come from various fields of interest and knowledge. To overcome this limitation, we propose a method for filtering a user's friends for information search, which leverages semantic video annotation and social networking services. Our method filters out, and brings forward, the friends who can give the user useful information about a specific product. By examining the existing Facebook information regarding users and their social graph, we construct a user profile of product interest. With the user's permission and authentication, the user's activities are enriched with domain-specific ontologies such as GoodRelations and the BestBuy data sources. In addition, we assume that the object in the video is already annotated using Linked Data, so the detailed information about the product the user would like to ask about is retrieved via its product URI. Our system calculates the similarities between the product and the friends' profiles in order to identify the most suitable friends for seeking information about the product. The system then ranks the user's friends by a score indicating how likely each friend is to provide useful information about the specific product of interest. We conducted an experiment with a group of respondents to verify and evaluate our system. First, a user-profile accuracy evaluation was conducted to demonstrate how correctly the constructed user profile of product interest represents the user's actual interests. Then, the filtering method was evaluated by inspecting the ranked results with human judgment. The results show that our method filters effectively and efficiently. Our system fulfills user needs by helping the user select appropriate friends for seeking useful information about a specific product, and as a result it helps inform and support the user's purchase decisions.
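As a concrete illustration of the final ranking step, the sketch below scores friends by cosine similarity between a product vector and per-friend interest-profile vectors. This is a minimal sketch under assumptions: the abstract does not state the similarity measure or the profile encoding, and every name and vector here is hypothetical.

```python
# Minimal sketch of profile-based friend filtering (hypothetical data; the
# paper's actual profiles come from Facebook activity enriched with
# GoodRelations / BestBuy Linked Data).
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, guarding against zero vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def rank_friends(product_vec, friend_profiles):
    """Return (friend, score) pairs sorted best-first."""
    scores = {name: cosine(product_vec, vec) for name, vec in friend_profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: a camera over the axes [electronics, photography, travel].
product = np.array([0.9, 0.8, 0.1])
friends = {"Alice": np.array([0.7, 0.9, 0.0]),   # photography enthusiast
           "Bob":   np.array([0.1, 0.0, 0.9])}   # mostly travel posts
print(rank_friends(product, friends))            # Alice ranks above Bob
```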

Pseudo Image Composition and Sensor Models Analysis of SPOT Satellite Imagery for Inaccessible Area (비접근 지역에 대한 SPOT 위성영상의 Pseudo영상 구성 및 센서모델 분석)

  • 방기인;조우석
    • Korean Journal of Remote Sensing / v.17 no.1 / pp.33-44 / 2001
  • This paper presents several satellite sensor models and satellite image decomposition methods for inaccessible areas, where ground control points can hardly be acquired in conventional ways. First, 10 different satellite sensor models, extended from the collinearity condition equations, were developed, and the behavior of each sensor model was investigated. Secondly, satellite images were decomposed and pseudo images were generated. The satellite sensor model extended from the collinearity equations is represented by the six exterior orientation parameters expressed as first-, second-, and third-order functions of the satellite image row. Among them, the rotational angle parameters $\omega$ (omega) and $\phi$ (phi), which are highly correlated with the positional parameters, can be assigned constant values. For inaccessible areas, satellite images were decomposed, meaning that two consecutive images were combined into one image. The combined image consists of one satellite image with ground control points and another without. In addition, a pseudo image, an imaginary image, was prepared from one satellite image with ground control points and another without; in other words, the pseudo image is an arbitrary image bridging two consecutive images. For the experiments, SPOT satellite images covering a similar area on different passes were used. In conclusion, the 10 satellite sensor models and 5 decomposition methods delivered different levels of accuracy. Among them, the sensor model using first-order functions of image row for the positional orientation parameters and the rotational angle $\kappa$ (kappa), with the rotational angles $\omega$ and $\phi$ held constant, provided the best result with the pseudo-image arrangement: a maximum error of 60 m at the check points.
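To make the model family concrete, the block below sketches the collinearity condition equations with row-dependent exterior orientation, in the general form the abstract describes (polynomials in image row up to third order, with $\omega$ and $\phi$ held constant in the best-performing variant). This is a standard pushbroom formulation written under assumptions, not the paper's verbatim equations for all ten variants.

```latex
% Collinearity equations for a pushbroom sensor; each image row \ell has its
% own perspective center (X_S, Y_S, Z_S) and rotation matrix R(\omega,\phi,\kappa).
\[
\begin{aligned}
x &= -f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
              {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},\\[4pt]
y &= -f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
              {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
\end{aligned}
\]
% Row-dependent exterior orientation, shown here to third order; the best
% model in the abstract used first-order positional terms and first-order kappa:
\[
X_S(\ell) = a_0 + a_1\ell + a_2\ell^2 + a_3\ell^3,\qquad
\kappa(\ell) = c_0 + c_1\ell,\qquad
\omega(\ell) = \omega_0,\quad \phi(\ell) = \phi_0 .
\]
```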

Estimating Fine Particulate Matter Concentration using GLDAS Hydrometeorological Data (GLDAS 수문기상인자를 이용한 초미세먼지 농도 추정)

  • Lee, Seulchan;Jeong, Jaehwan;Park, Jongmin;Jeon, Hyunho;Choi, Minha
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.919-932 / 2019
  • Fine particulate matter (PM2.5) is not only affected by anthropogenic emissions, but is also intensified, transported, and reduced by hydrometeorological factors. It is therefore essential to understand the relationships between hydrometeorological factors and PM2.5 concentration. In Korea, PM2.5 concentration is measured at ground observatories, and values are estimated for locations without observatories. Such point data are not suitable for representing an area, so accurate concentrations at unobserved locations cannot be known, and it is hard to trace the migration, intensification, and reduction of PM2.5. In this study, we analyzed the relationships between hydrometeorological factors acquired from the Global Land Data Assimilation System (GLDAS) and PM2.5 by means of Bayesian Model Averaging (BMA). Using BMA, we also selected the factors that have a meaningful relationship with the variation of PM2.5 concentration. Four seasonal PM2.5 concentration models were developed using the selected factors together with Aerosol Optical Depth (AOD) from the MODerate resolution Imaging Spectroradiometer (MODIS). Finally, we mapped the model output to show the spatial distribution of PM2.5. The model correlated well with the observed PM2.5 concentration (R ~0.7; IOA ~0.78; RMSE ~7.66 µg/m³). When the models were compared with observed PM2.5 concentrations at different locations, the correlation coefficients differed (R: 0.32-0.82), although the data distributions were similar. The concentration map derived from the models demonstrated its capability to represent the temporal and spatial variation of PM2.5 concentration. If the study area is extended to East Asia, the results of this study are expected to facilitate research analyzing the sources and movement of PM2.5.
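The factor-selection step can be illustrated with a minimal BMA sketch: fit a linear model for every subset of candidate predictors, weight each model by exp(-BIC/2), and read off each predictor's posterior inclusion probability. This is a textbook BIC approximation under assumed Gaussian errors, not the authors' exact implementation; the factor names and data below are hypothetical stand-ins for GLDAS variables.

```python
# Minimal BMA sketch: BIC-weighted model averaging over predictor subsets.
from itertools import combinations
import numpy as np

def ols_bic(X, y):
    """Fit OLS with an intercept and return the Gaussian BIC."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = float(np.sum((y - Xd @ beta) ** 2))
    n, k = Xd.shape
    return n * np.log(rss / n) + k * np.log(n)

def inclusion_probabilities(X, y, names):
    """Posterior inclusion probability of each predictor under BIC weights."""
    p = X.shape[1]
    subsets = [s for r in range(1, p + 1) for s in combinations(range(p), r)]
    bics = np.array([ols_bic(X[:, list(s)], y) for s in subsets])
    w = np.exp(-0.5 * (bics - bics.min()))   # shift for numerical stability
    w /= w.sum()                             # normalize to model posteriors
    return {names[j]: float(sum(wi for wi, s in zip(w, subsets) if j in s))
            for j in range(p)}

# Hypothetical stand-ins for GLDAS factors and station PM2.5 observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 25 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)
print(inclusion_probabilities(X, y, ["wind", "humidity", "temperature", "pressure"]))
```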

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.163-176 / 2014
  • Social media has become a platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena by analyzing the chatter of microblogs, but measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch; in a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Modeling the time-related characteristics of features is therefore key when measuring TV ratings through microblogs, and we show that capturing the time dependency of features is vital for improving accuracy. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. Our data set contains about 300 thousand posts; after excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of a public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, which implies that a simple tweet rate does not reflect satisfaction with, or response to, the TV programs. Content-based features extracted from the text of tweets have a relatively high correlation with TV ratings; furthermore, some emoticons and newly coined words that are not tagged in the morpheme-extraction process have a strong relationship with TV ratings. We also find a time dependency in the correlation of features between the periods before and after the broadcast. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectations for the program or their disappointment at not being able to watch it. The features that correlate highly before the broadcast differ from those after the broadcast, which shows that the relevance of words to a TV program can change with the timing of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words expressing the impossibility of watching the program show high relevance despite carrying a negative meaning. Understanding this time dependency of features can help improve the accuracy of TV ratings measurement. This research provides a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter. Further research is needed to refine the methodology for predicting or measuring TV ratings.
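The core analysis can be sketched as follows: for each candidate word, correlate its per-episode tweet frequency with the ratings separately in windows before and after airtime, and note in which window the correlation peaks. This is a minimal sketch under assumptions: the column names, window widths, and data frames are hypothetical, and Pearson r stands in for whatever measure the authors used.

```python
# Sketch of per-window word/ratings correlation (hypothetical schema).
import pandas as pd

def window_correlations(tweets: pd.DataFrame, ratings: pd.Series, word: str):
    """tweets: columns [episode, hours_from_airtime, text];
    ratings: indexed by episode. Returns (r_before, r_after)."""
    has_word = tweets["text"].str.contains(word, regex=False)
    out = {}
    for label, mask in {"before": tweets["hours_from_airtime"].between(-24, 0),
                        "after":  tweets["hours_from_airtime"].between(0, 24)}.items():
        counts = (tweets[mask & has_word]
                  .groupby("episode").size()
                  .reindex(ratings.index, fill_value=0))
        out[label] = counts.corr(ratings)   # Pearson r within this window
    return out["before"], out["after"]
```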

Continuity of Scientists' Information Production and Information Communication (2)

  • Garvey, W.D.
    • Journal of Information Management / v.6 no.5 / pp.131-134 / 1973
  • In the first report of this series, we described the general procedures and some results of 78 studies of the information-exchange activities of 12,442 scientists and engineers in the physical sciences, social sciences, and engineering. Conducted over more than four and a half years (1966-1971), the research was designed and carried out so that the dissemination and assimilation of information through various formal and informal media could be represented as a comprehensive scheme, from the time scientists begin their research until the results are incorporated, in recorded form, into the body of current scientific knowledge. The second, third, and fourth reports presented general descriptions of the data collected and accumulated in the data bank. (1) The role of national meetings in the communication of science and technology (Garvey; report 4): national meetings were found to have a distinct and important function in the overall communication process, regularly serving both as a principal early occasion for the initial dissemination of scientific research, before the relatively long interval between submission and journal publication leaves the work's publication status temporarily unclear in the public media, and as the last of the informal media. (2) The dissemination process related to the production of journal-published information (Garvey; report 1): in following the communication process closely, we found that information from research does not become truly public until it appears in a journal, and that advanced research is therefore often out of date by then. Experienced consumers of information are very sensitive to this obsolescence and try every means of obtaining information about ongoing or recently completed research bearing on their own work; for example, rather than waiting for the information to appear in a journal, a secondary source, or the other journals they typically use, they draw on the dissemination processes that occur before an article is published in order to catch information they might otherwise miss. (3) The continuity of information dissemination by "information-producing scientists" (the results of this part of the series form the main content of the present paper): about two-thirds of the scientists who published articles during the two-year period from 1968/1969 to 1970/1971 (those who had published "high-quality" articles in journals in 1968/1969) continued research in the same subject area as their 1968/1969 articles. We thus found that most of the authors in this study practiced science in the manner described by Kuhn (report 5) as normal science, that is, with the pursuit of complete answers to the questions arising in the course of research as their most important goal. Having recently completed research and published the results as an article, these scientists were, in deciding what to do next, closely tied to the work and subjects of other researchers holding the same prior views. An indicator of the effect of this continuity is that about three-quarters of the authors who continued research in the same area as their article reported that the new research derived directly from the results described in the preceding article. Our data show, however, that unexpected shifts to other areas can also occur: more than one-fifth of the authors who had continued work on the same subject later moved their research to a new area and continued working there. Such a change of research area does not greatly alter a researcher's general communication pattern; in the pattern arising from the move to a new intellectual problem, authors try to fit the methods and techniques of the old problem to the new one. As anticipated by recent interpretations of the history of science (Hanson; report 6), the continuity of normal science is not absolute, and the first steps of "scientific knowledge" may be taken into a different area regardless of the subject of the previous research. In our study, one-third of the authors did not pursue successive research on subjects in the same area but moved to new areas. We take these data to mean (a) that when concentrated scientific effort is examined through the activities of individual scientists, a large amount of continuity in each scientist's research appears in any advancing field of science, and (b) that this continuity is a necessary attribute of the concentrated advance of science. In the communication problems associated with this continuity, we can also detect the time spent meeting the individual scientist's information needs, required both for the progress of each stage of research and for the shift to a new goal when moving to a new subject area.
These observations suggest that a system for the selective provision of information must be highly flexible if it is to satisfy current information needs effectively. A review of the whole information-communication process described in this series suggests that scientists have continually developed flexible communication arrangements that accommodate their needs. The system is organized around dissemination events, and most participants in those events are themselves disseminators of scientific information: it is fundamentally a dissemination system. But, as the communication behavior within the process shows, most disseminators of information are also assimilators of information; in other words, producers of scientific information are also users of information. In this study, the typical scientist was continuously involved in both the production and the dissemination of scientific information. When a researcher completes a piece of research, he forms an idea of what to do next, and thus begins new work at the same time as he uses information about the "completed" research. For example, when a scientist reaches the stage of being able to give fellow researchers in the same area a complete report defensible against objections, we can see that he plays many roles in the information-flow process: he is a disseminator of scientific information when he provides other scientists with the latest scientific results; from the standpoint of seeking comments and criticism from colleagues on the significance and validity of the research, he is an information seeker; and insofar as he accepts feedback from the information thus presented and assimilated for future use (as when preparing a manuscript for submission to a journal), he is an information user. In all of these capacities, the information producer has in effect already entered the next round of information production (two-thirds of the authors had begun new research before their articles were published). After finishing his research and preparing a preliminary report, a scientist continues to disseminate information about it; the general pattern includes appearances before small groups of colleagues (e.g., local colloquia) and before larger audiences (e.g., national meetings), while a variety of written reports are produced in the meantime. It is clear, however, that the scientist's principal dissemination target for his research is an article published in a scientific journal. At each dissemination stage on the way to this goal, scientists seek feedback from the audience, from the information they have assimilated, and from the information already used. As we have tried to show in this series, these activities were effective for scientists and others who keep seeking information until the referees' opinions are reflected in the manuscript and the manuscript is accepted for journal publication. Once a manuscript is accepted, its authors often cease to act as active disseminators of its main content, and their role visibly changes: temporarily, they become assimilators of information in order to begin new work, and the comments and criticism on the earlier work influence the new work. At the same time, they enter a new cycle of scientific-information production and constantly look for information about research in progress or recently completed. For actively working scientists, separating the role of assimilator from the role of disseminator is not realistic: the former is used in order to accomplish the latter. At one stage the scientist's role as disseminator is prominent, while at another the exchange of information is tied fundamentally to assimilation. The interrelation between disseminators and assimilators of information (or between producers and users of information) is an essential aspect of science. That the communication structure of science is organized as a complex, dynamic system around the needs of disseminators (rather than around their role as users) emerges necessarily in the development of science. This relates to Medvedev's account of the career of Lysenko [7], which points out that when those who disseminate scientific information are denied the opportunity to disseminate information about their research at national meetings, so that the judging and screening of disseminated information is reduced and, as a result, criticism and revision in journals and monographs are excluded, established science rapidly becomes unscientific.


Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. Text also attracts many analysts because it exists in very large amounts and is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. Text summarization in particular is actively applied in business, for example through news summary services and privacy-policy summary services. In academia, much research has followed the extractive approach, which selectively provides the main elements of the document, and the abstractive approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as automatic text summarization itself. Most existing studies of summarization quality evaluation manually summarize documents, use these as reference documents, and measure the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text using various techniques, and quality is measured by comparison with the reference document, which is regarded as an ideal summary. Reference documents are provided in two major ways; the most common is manual summarization, in which a person creates the ideal summary by hand. Since this method requires human intervention, preparing the summary takes much time and cost, and the evaluation result may differ depending on the subjectivity of the summarizer. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary; in this method, the more frequently terms from the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" on frequency alone is not necessarily a good summary in this essential sense. To overcome the limitations of these previous studies, this study proposes an automatic quality-evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the original content is missing from the summary. We propose a method for the automatic quality evaluation of text summarization based on these two concepts. To evaluate the practical applicability of the proposed methodology, we extracted 29,671 sentences from TripAdvisor's hotel reviews, summarized the reviews for each hotel, and present the results of experiments evaluating summary quality according to the proposed methodology. We also provide a way to integrate completeness and succinctness, which stand in a trade-off relationship, into an F-score, and propose a method to perform optimal summarization by changing the sentence-similarity threshold.
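The two measures and their F-score combination can be sketched as follows, using TF-IDF cosine similarity between sentences. The 0.3 threshold, the coverage rule, and the duplicate penalty are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch of completeness/succinctness scoring with a combined F-score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summary_quality(source_sents, summary_sents, threshold=0.3):
    vec = TfidfVectorizer().fit(source_sents + summary_sents)
    S, M = vec.transform(source_sents), vec.transform(summary_sents)
    # Completeness: fraction of source sentences covered by some summary sentence.
    completeness = float((cosine_similarity(S, M).max(axis=1) >= threshold).mean())
    # Succinctness: penalize near-duplicate sentence pairs inside the summary.
    sim = cosine_similarity(M)
    n = len(summary_sents)
    dup = sum(sim[i, j] >= threshold for i in range(n) for j in range(i + 1, n))
    succinctness = 1.0 - dup / max(1, n * (n - 1) / 2)
    f_score = 2 * completeness * succinctness / max(1e-9, completeness + succinctness)
    return completeness, succinctness, f_score
```

Raising the threshold makes coverage harder to earn (lowering completeness) while flagging fewer duplicates (raising succinctness), which is exactly the trade-off the F-score is meant to balance.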

Application of MicroPACS Using the Open Source (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.51-56 / 2009
  • Purpose: Recently, most hospitals have introduced PACS, and use of the system continues to expand. A small-scale PACS, called MicroPACS, can already be built using open source programs. The aim of this study is to demonstrate the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. This study describes how to set up a MicroPACS with open source programs and assesses its storage capability, stability, compatibility, and performance for operations such as "retrieve" and "query". Materials and Methods: 1. To begin with, we searched for open source software meeting the following criteria: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not limit storage capacity; (6) it must support DICOM. 2. (1) To evaluate data-storage availability, we compared the time spent backing up data with the open source software against optical discs (CDs and DVD-RAMs), and likewise compared the time needed to retrieve data from each. (2) To estimate work efficiency, we measured the time spent finding data on CDs, on DVD-RAMs, and in the MicroPACS; seven technologists participated. 3. To evaluate the stability of the software, we examined whether any data were lost while the system was maintained for a year, and for comparison counted the errors occurring in randomly selected data from 500 CDs. Result: 1. Among 11 open source packages, we chose the Conquest DICOM Server, using MySQL as the database management system. 2. (1) Back-up and retrieval times (min) were: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) with GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) with GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) with SIEMENS. (2) The time (sec) required to find data was: CD (156±46), DVD-RAM (115±21), and Conquest DICOM Server (13±6). 3. There was no data loss (0%) over the year, with 12,741 PET/CT studies stored in 1.81 TB; among the 500 CDs, on the other hand, 14 errors (2.8%) were found. Conclusions: We found that a MicroPACS can be set up with open source software and that its performance is excellent. The system built with open source proved more efficient and more robust than the back-up process using CDs or DVD-RAMs. We believe the MicroPACS can serve as an effective data-storage device as long as its operators continue to develop and systematize it.
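Because Conquest speaks standard DICOM, the "query" operation the study benchmarks can be illustrated with a C-FIND request. The sketch below uses pynetdicom, a library not used in the paper; the host, port, and AE titles are assumptions (CONQUESTSRV1 on port 5678 is a common Conquest default, but verify against your installation).

```python
# Illustrative C-FIND query against a Conquest DICOM server (hypothetical
# endpoint and AE titles; not the paper's own tooling).
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="MICROPACS_CLIENT")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "*"                 # match any patient
query.ModalitiesInStudy = "PT"        # PET studies
query.StudyInstanceUID = ""           # ask the server to return the UID

assoc = ae.associate("127.0.0.1", 5678, ae_title="CONQUESTSRV1")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)   # one matching study per response
    assoc.release()
```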


A Theoretical Study for Estimation of Oxygen Effect in Radiation Therapy (방사선 조사시 산소가 세포에 미치는 영향의 이론적 분석)

  • Rena J. Lee;HyunSuk Suh
    • Progress in Medical Physics / v.11 no.2 / pp.157-165 / 2000
  • Purpose: To estimate the yields of DNA damage induced by radiation and enhanced by oxygen, a mathematical model was used and tested. Materials and Methods: Reactions of the products of water radiolysis were modeled as a system of time-dependent ordinary differential equations. These reactions include the formation of radicals, DNA damage, damage repair, restitution, and damage fixation by oxygen and H radicals. Several rate constants were obtained from the literature, while others were calculated by fitting experimental data. Sensitivity studies were performed by changing the chemical rate constants at a constant oxygen number density and by varying the oxygen concentration. The effect of oxygen concentration as well as the damage-fixation mechanism by oxygen were investigated, and the oxygen enhancement ratio (OER) was calculated to compare the simulated data with experimental data. Results: Sensitivity studies with oxygen showed that DNA survival is a function of both the oxygen concentration and the magnitude of the chemical rate constants. There was no change in the survival fraction as a function of dose while the oxygen concentration varied from 0 to $1.0 \times 10^{7}$; when it varied from $1.0 \times 10^{7}$ to $1.0 \times 10^{10}$, there was a significant decrease in cell survival. The OER values obtained from the simulation study were 2.32 at the 10% cell-survival level and 1.9 at the 45% level. Conclusion: Sensitivity studies with oxygen demonstrated that the experimental data were reproduced, with the effects enhanced in the cases where the oxygen rate constants are largest and the oxygen concentration is increased. The OER values obtained from the simulation showed good agreement at a low level of cell survival, indicating that this semi-empirical model can predict the effect of oxygen on cell killing.
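A minimal sketch of this kind of model is given below: radicals attack DNA, the resulting damage is either repaired or made permanent ("fixed") by oxygen, and survival falls with fixed damage. The reaction set, rate constants, and survival mapping are illustrative assumptions, far simpler than the paper's full model.

```python
# Toy radical/damage/fixation ODE system (assumed form; constants illustrative).
import numpy as np
from scipy.integrate import solve_ivp

K_DMG, K_REP, K_FIX = 1e-2, 5e-3, 1e-9   # damage, repair, oxygen-fixation rates

def rhs(t, y, o2):
    R, D, F = y                                   # radicals, reparable damage, fixed damage
    dR = -K_DMG * R                               # radicals consumed attacking DNA
    dD = K_DMG * R - K_REP * D - K_FIX * o2 * D   # damage created, repaired, or fixed
    dF = K_FIX * o2 * D                           # oxygen "fixes" damage permanently
    return [dR, dD, dF]

for o2 in (0.0, 1e7, 1e10):                       # oxygen densities from the abstract
    sol = solve_ivp(rhs, (0.0, 1000.0), [1.0, 0.0, 0.0], args=(o2,), rtol=1e-8)
    survival = np.exp(-5.0 * sol.y[2, -1])        # toy survival from fixed damage
    print(f"O2={o2:.0e}: fixed damage={sol.y[2, -1]:.3e}, survival~{survival:.3f}")
```

Even this toy system reproduces the qualitative finding: survival is unchanged at low oxygen density and drops sharply once oxygen-mediated fixation outcompetes repair.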


Mapping and estimating forest carbon absorption using time-series MODIS imagery in South Korea (시계열 MODIS 영상자료를 이용한 산림의 연간 탄소 흡수량 지도 작성)

  • Cha, Su-Young;Pi, Ung-Hwan;Park, Chong-Hwa
    • Korean Journal of Remote Sensing / v.29 no.5 / pp.517-525 / 2013
  • The time series of the Normalized Difference Vegetation Index (NDVI) obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery forms a waveform that reveals the characteristics of the phenology. The waveform can be decomposed into harmonics of various periods by the Fourier transform; the resulting $n^{th}$ harmonic represents the amount of NDVI change over a period of one year divided by $n$. The values of the harmonics, or their relative relations, have been used to classify vegetation species and to build vegetation maps. Here, we propose a method to estimate the annual amount of carbon absorbed by the forest from the value of the $1^{st}$ harmonic of the NDVI. The $1^{st}$ harmonic represents the growth of the leaves, and by the allometric equations of trees, leaf growth can be considered proportional to the total amount of carbon absorption. We compared the $1^{st}$ harmonic NDVI values of 6,220 sample points with reference carbon-absorption data obtained by field surveys in the forests of South Korea. The $1^{st}$ harmonic values were roughly proportional to the amount of carbon absorption irrespective of the species and age of the vegetation, and the resulting proportionality constant between carbon absorption and the $1^{st}$ harmonic value was 236 tCO2/5.29ha/year. The total amount of carbon dioxide absorbed by the forests of South Korea over the last ten years was estimated to be about 56 million tons, which coincides with previous reports obtained by other methods. Considering that carbon absorption is becoming a kind of currency through carbon credits, the generality of our method makes it very useful.
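Extracting the $1^{st}$ harmonic is a one-line Fourier operation; the sketch below shows it for a single pixel's one-year NDVI series. The 23-sample year (16-day MODIS composites) and the toy data are assumptions for illustration, and the regression from amplitude to carbon absorption is deliberately omitted.

```python
# Sketch: first annual harmonic of a one-year MODIS NDVI series via the real FFT.
import numpy as np

def first_harmonic_amplitude(ndvi: np.ndarray) -> float:
    """Amplitude of the once-per-year component of an evenly sampled series."""
    spectrum = np.fft.rfft(ndvi)
    return 2.0 * np.abs(spectrum[1]) / len(ndvi)  # bin 1 = one cycle per record

# Toy pixel: seasonal cycle of amplitude 0.25 plus noise.
t = np.arange(23)
rng = np.random.default_rng(1)
ndvi = 0.5 + 0.25 * np.sin(2 * np.pi * t / 23) + 0.02 * rng.normal(size=23)
print(f"1st-harmonic amplitude: {first_harmonic_amplitude(ndvi):.3f}")  # ~0.25
```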

Access Restriction by Packet Capturing during the Internet based Class (인터넷을 이용한 수업에서 패킷캡쳐를 통한 사이트 접속 제한)

  • Yi, Jungcheol;Lee, Yong-Jin
    • 대한공업교육학회지 / v.32 no.1 / pp.134-152 / 2007
  • This study deals with the development of a computer program that can restrict students from accessing disallowed web sites during Internet-based classes. The proposed program detects each student's accesses to disallowed sites and displays the access list on the teacher's computer screen. By limiting student access, the teacher can enhance the efficiency of the class and fulfill its educational purpose, and our results lead to the effective and safe use of the Internet as a teaching tool in class. By contrast, the typical existing method is simply to switch off the LAN (Local Area Network) in order to block students' access to disallowed sites. Our program was developed on the Linux operating system in a small network environment. It includes the following five functions: a translation function that resolves domain names into IP (Internet Protocol) addresses; a search function that finds the active student computers; a packet snooper that captures passing packets and inspects their contents; a comparison function that checks the captured packet contents against a predefined access-restriction IP address list; and a restriction function that limits network access when the destination IP address matches an address in the restriction list (a minimal sketch of the capture-and-compare step follows this abstract). The program captures all packets passing through the computer laboratory accurately and in real time, and presents the teacher with all relevant information about students' accesses to disallowed sites, so the teacher can stop such access immediately. The proposed program can be applied to the small networks of elementary, junior high, and senior high schools. Our results contribute to effective class management and efficient computer-laboratory management. Related work provides packet observation and access limitation for only a single host, whereas our program provides them for all active hosts.
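The capture-and-compare step can be sketched in a few lines with scapy (the paper's tool is a compiled Linux program; this Python version only illustrates the idea, and the restricted addresses are hypothetical examples). Capturing raw packets requires root privileges.

```python
# Hedged sketch of the capture-and-compare step with scapy. Needs root.
from scapy.all import IP, sniff

RESTRICTED = {"203.0.113.7", "198.51.100.25"}   # hypothetical banned-site IPs

def inspect(pkt):
    """Report packets whose destination is on the restriction list."""
    if IP in pkt and pkt[IP].dst in RESTRICTED:
        print(f"blocked-site access: {pkt[IP].src} -> {pkt[IP].dst}")
        # Real enforcement would add a firewall rule (iptables/nftables) here;
        # printing stands in for the teacher-side display.

sniff(filter="ip", prn=inspect, store=0)        # capture without buffering
```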