• Title/Abstract/Keyword: R&E network

263 search results (processing time: 0.034 seconds)

M/W 중계장치 대체를 위한 B-WLL 및 광전송 장치의 품질과 경제성 분석에 대한 연구 (A Study on Quality and Economical Analysis of B-WLL and Optical Transmission Systems for Substituting M/W Relay System)

  • 서경환;최용석
    • 한국전자파학회논문지
    • /
    • Vol. 15, No. 9
    • /
    • pp.809-819
    • /
    • 2004
  • In this paper, we analyze, in terms of quality and economic feasibility, whether the required circuits can be provided by B-WLL or optical transmission networks instead of expanding or newly assigning M/W relay frequencies. For the quality objective, preconditions and an analysis method for comparing the media were derived. A criterion of a bit error rate (BER) of $10^{-6}$ at the E1 level was set, and the achievable distance for a given availability was calculated for each medium. On the economic side, based on the basic configuration units of optical, M/W, and B-WLL systems, the analysis was organized by service period, equipment capacity, transmission distance, and high-capacity transmission, and various optical-circuit and B-WLL configurations were examined. With a target availability of 99.999 %, B-WLL was found to be able to replace M/W up to a radius of 1.6 km. When replacing M/W with optical fiber, leasing utility poles for the optical duct is more economical than M/W regardless of the service period, whereas excavating the duct itself turned out not to be economical in any of the cases considered.
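
As a side note to the 99.999 % availability target quoted in the abstract above, the standard availability-to-downtime conversion (not taken from the paper) shows how much annual outage each availability level allows; a minimal sketch:

```python
# Convert an availability target into the allowed outage per year.
# Purely illustrative; the availability levels below are assumed examples.
MIN_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9995, 0.9999, 0.99999):
    outage_min = (1 - availability) * MIN_PER_YEAR
    print(f"{availability:.5%} availability -> {outage_min:7.1f} min/year outage")
```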

물류 및 유통산업의 블록체인 활용과 정책 방향 (Application and Policy Direction of Blockchain in Logistics and Distribution Industry)

  • 김기흥;심재현
    • 산경연구논집
    • /
    • Vol. 9, No. 6
    • /
    • pp.77-85
    • /
    • 2018
  • Purpose - The purpose of this study is to identify how the trade-transaction-centered structure of the logistics and distribution industry can be subdivided for blockchain adoption, which technologies are needed to build and operate such a system, and what policy directions government institutions and technology policy should take to apply blockchain in this industry. Research design, data, and methodology - This study draws on previous research on blockchain application cases in various industry sectors. Results - General fields of blockchain application include digital content distribution, IoT platforms, e-commerce, real-estate transactions, decentralized app development (storage), certification services, smart contracts, P2P network infrastructure, publication and storage of public documents, smart voting, money exchange, payment and settlement, banking security platforms, actual asset storage, stock transactions, and crowdfunding. Blockchain is being applied in various fields at home and abroad, with cases in the banking industry, the public sector, e-commerce, the medical industry, distribution and supply chain management, and copyright protection. As these cases show, a blockchain-based distributed ledger can be expected to secure safety in trade transactions. When the parties to a trade transaction build their transaction model on a distributed ledger, they can secure visibility and real-time tracking even over a specific interrupted section of the chain. In addition, the model should support management from the time of contract through payment and the transfer of freight to buyers via land, air, and maritime transportation. Conclusions - To boost a blockchain-based logistics/distribution industry, the government, institutionally, needs to supplement the legal framework for shipping, logistics, and distribution, review standardization of electronic switching systems, and draw up blockchain-based industrial road maps. Technologically, the government has to support R&D for integration with other high technologies, standardization of the distribution industry's blockchain technology, and manpower training to expand technology development.
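
The Results paragraph above centers on the idea that a distributed ledger gives all parties to a trade transaction tamper-evident visibility. The sketch below is a minimal, generic hash-chained ledger, not the system proposed in the paper; the event records and field names are invented for illustration.

```python
# Minimal hash-chained ledger sketch: each record points to the hash of the
# previous record, so any later tampering is detectable by every party.
import hashlib, json, time

def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        prev_ok = block["prev_hash"] == (chain[i - 1]["hash"] if i else "0" * 64)
        if block["hash"] != expected or not prev_ok:
            return False
    return True

ledger = []
add_block(ledger, {"event": "contract signed", "buyer": "B", "seller": "S"})
add_block(ledger, {"event": "freight loaded", "carrier": "C", "port": "Busan"})
add_block(ledger, {"event": "payment settled", "amount": 10000})
print("ledger valid:", verify(ledger))

ledger[1]["record"]["port"] = "Incheon"   # simulate tampering mid-chain
print("after tampering:", verify(ledger))
```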

한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발 (DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA)

  • 박만배
    • 대한교통학회:학술대회논문집
    • /
    • Korean Society of Transportation, 27th Annual Conference (1995)
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with the corresponding TLFs produced by the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run, using 32 and 16 selected links, is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As with the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments appear overall to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
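
The abstract above combines a gravity model for trip distribution with a SELINK adjustment factor defined as the ratio of the observed ground count to the assigned link volume. The following toy sketch illustrates both ideas on a made-up three-zone example; the zone data, the friction-factor form F(d) = d^-2, and the single "selected link" are assumptions for illustration, not WisDOT's calibrated model.

```python
# Toy gravity-model distribution plus a SELINK-style link adjustment factor.
import numpy as np

productions = np.array([100.0, 200.0, 150.0])    # truck trips produced per zone
attractions = np.array([120.0, 180.0, 150.0])    # truck trips attracted per zone
dist = np.array([[1.0, 4.0, 7.0],
                 [4.0, 1.0, 3.0],
                 [7.0, 3.0, 1.0]])                # zone-to-zone distances (arbitrary units)

friction = dist ** -2.0                           # assumed friction factor curve F(d) = d^-2

# Gravity model: T_ij proportional to P_i * A_j * F(d_ij), scaled so row sums match productions.
raw = productions[:, None] * attractions[None, :] * friction
trips = raw * (productions / raw.sum(axis=1))[:, None]

# Suppose all inter-zonal trips use one "selected link" with an observed ground count.
assigned = trips.sum() - np.trace(trips)          # volume assigned to the selected link
ground_count = 0.85 * assigned                    # pretend the observed count is lower
adj = ground_count / assigned                     # SELINK link adjustment factor
print(f"assigned={assigned:.1f}, ground count={ground_count:.1f}, adjustment factor={adj:.2f}")
```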


RGG/WSN을 위한 분산 저장 부호의 성능 분석 (A Performance Analysis of Distributed Storage Codes for RGG/WSN)

  • 정호영
    • 한국정보전자통신기술학회논문지
    • /
    • Vol. 10, No. 5
    • /
    • pp.462-468
    • /
    • 2017
  • In this paper, an IoT/WSN is modeled as a random geometric graph and the performance of the local codes used to efficiently store the data generated in the WSN is examined. Wireless sensor networks with n = 100 and n = 200 nodes were modeled as random geometric graphs, and the decoding performance of decentralized storage codes was analyzed by simulation. For both n = 100 and n = 200 total nodes, the decoding success rate as a function of the decoding ratio ${\eta}$ was found to depend on the number of source nodes k rather than on n. In particular, regardless of n, the decoding success probability exceeded 70 % for ${\eta} \leq 2.0$. Examining the amount of decoding computation as a function of the decoding ratio ${\eta}$, the computational load of BP decoding was found to grow exponentially as the number of source nodes k increases. This is attributed to the LT code becoming longer as the number of source nodes grows, which in turn greatly increases the decoding computation.
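
As a rough illustration of the kind of simulation described above, the sketch below generates LT-coded symbols over k source packets and runs a peeling (BP-style) decoder to estimate the decoding success rate at several decoding ratios η. The ideal soliton degree distribution, the value of k, and the trial count are assumptions for illustration, not the paper's setup.

```python
# Estimate LT-code decoding success probability vs. decoding ratio eta
# with a simple peeling (belief-propagation) decoder.
import random

def lt_encode(k, n_enc, rng):
    """Generate n_enc coded symbols; each is the XOR of a random subset of k sources."""
    # Ideal soliton degree distribution: P(1)=1/k, P(d)=1/(d(d-1)) for d=2..k.
    probs = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    degrees = rng.choices(range(1, k + 1), weights=probs, k=n_enc)
    return [set(rng.sample(range(k), d)) for d in degrees]

def bp_decode(k, coded):
    """Peeling decoder: repeatedly resolve degree-1 coded symbols."""
    coded = [set(s) for s in coded]
    recovered = set()
    progress = True
    while progress:
        progress = False
        for s in coded:
            if len(s) == 1:
                src = next(iter(s))
                recovered.add(src)
                progress = True
                for t in coded:          # remove the recovered source everywhere
                    t.discard(src)
    return len(recovered) == k

rng = random.Random(1)
k, trials = 20, 200
for eta in (1.0, 1.5, 2.0, 2.5):
    ok = sum(bp_decode(k, lt_encode(k, int(eta * k), rng)) for _ in range(trials))
    print(f"eta={eta:.1f}  success rate ~ {ok / trials:.2f}")
```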

국가 과학기술 표준분류 체계 기반 연구보고서 문서의 자동 분류 연구 (Research on Text Classification of Research Reports using Korea National Science and Technology Standards Classification Codes)

  • 최종윤;한혁;정유철
    • 한국산학기술학회논문지
    • /
    • Vol. 21, No. 1
    • /
    • pp.169-177
    • /
    • 2020
  • Research and development results in science and technology are submitted to the National Science and Technology Information Service (NTIS) in the form of research reports. Each report carries a classification code under the Korea National Science and Technology Standards Classification (K-NSCC), which the author enters manually at submission time. With more than 2,000 sub-categories, however, it is easy to select an inaccurate code without a precise understanding of the classification scheme. Given the volume and diversity of newly collected research reports, classifying them automatically and more accurately would not only reduce the burden on submitters but also make it easier to link the reports to other value-added analysis services. Yet there have been almost no domestic studies on automatic document classification based on the national science and technology standard classification, and no public training data exist. This study is the first attempt to use the metadata of NTIS research reports held by KISTI for the most recent five years (2013-2017), and it investigates automatic document classification techniques that perform well on domestic research reports against the vast standard classification scheme. To this end, about 210 mid-level categories suitable for classifying science and technology research reports were selected from the standard classification, and preprocessing reflecting the characteristics of the report metadata was applied. In particular, we propose a TK_CNN-based deep learning method that uses only the most influential fields, the project title and the keywords. The proposed model was compared with machine learning methods that perform well in text classification (e.g., Linear SVC, CNN, GRU) and showed a performance advantage of 1-7 % in terms of Top-3 F1 score.
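
The abstract above classifies reports from title and keyword text and reports Top-3 F1. The sketch below is a minimal, generic baseline (not the proposed TK_CNN): a TF-IDF + Linear SVC classifier scored by top-3 hits; the toy documents and category labels are invented for illustration.

```python
# Tiny top-3 classification baseline over title/keyword-style text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
import numpy as np

train_text = [
    "deep learning image recognition neural network",
    "wireless sensor network routing protocol",
    "crop yield soil moisture remote sensing",
    "battery electrode lithium ion material",
    "graph neural network node classification",
    "sensor network energy efficient clustering",
    "satellite imagery vegetation index estimation",
    "cathode material energy density storage",
]
train_label = ["AI", "Network", "Agriculture", "Energy"] * 2  # one label per text, in order

test_text = ["convolutional network for text classification",
             "low power wireless sensor deployment"]
test_label = ["AI", "Network"]

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_text)
X_test = vec.transform(test_text)

clf = LinearSVC().fit(X_train, train_label)
scores = clf.decision_function(X_test)              # one score per class
top3 = np.argsort(scores, axis=1)[:, ::-1][:, :3]   # indices of the 3 best classes
classes = clf.classes_

hits = [t in classes[idx] for t, idx in zip(test_label, top3)]
print("top-3 hit rate:", sum(hits) / len(hits))
```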

공급사슬 관리 구축전략에 관한 연구: LG전자 사례 중심으로 (A study of SCM strategic plan: Focusing on the case of LG electronics)

  • 이기원;이상윤
    • 유통과학연구
    • /
    • Vol. 9, No. 3
    • /
    • pp.83-94
    • /
    • 2011
  • In Korea, with the exception of some large companies, firms have been very passive about building supply chain management (SCM) systems, and most small and medium-sized enterprises are not even aware of the concept of SCM. Inefficient management of the supply chain leads to inefficiencies such as inventory-management and demand-management costs for domestic manufacturers, suppliers, distributors, and logistics companies, and it further weakens the competitiveness of Korean firms. The reasons lie in the inherent characteristics of SCM, namely the information sharing and process innovation required across the entire supply chain, and in the sheer breadth of SCM itself. This paper reviews the theoretical discussion of SCM, implementation strategies, and adoption and success cases, and, based on this analysis, examines the current situation and proposes improvements. To discuss how to pursue SCM successfully, the theoretical background of SCM must be reviewed first. Chapter II therefore describes the basic concepts and necessity of SCM, and Chapter III describes the problems of SCM as currently pursued. Finally, Chapters IV and V present SCM implementation strategies, the LG Electronics case, and the conclusions.


뇌기반 진화적 과학 교수학습 모형의 개발 (Development of a Model of Brain-based Evolutionary Scientific Teaching for Learning)

  • 임채성
    • 한국과학교육학회지
    • /
    • Vol. 29, No. 8
    • /
    • pp.990-1010
    • /
    • 2009
  • To derive brain-based evolutionary educational principles, this study reviewed research on the structural and functional features of the human brain, on biological evolution occurring between and within individuals, on evolutionary processes within the brain, and on the evolutionary attributes inherent in science itself and in individual scientists' activities. Based on the key features of the human brain thus derived and on universal Darwinism or universal selectionism, whose core elements are generation-selection-retention, a model of brain-based evolutionary scientific teaching for learning was developed. The model consists of three components, three steps, and assessment. The three components are the affective, behavioral, and cognitive components, and the three steps constituting each component are Diversifying $\rightarrow$ comparing and selecting $\rightarrow$ extending and applying (ABC-DEF: Affective, Behavioral, Cognitive components - Diversifying $\rightarrow$ Emulating, Estimating, Evaluating $\rightarrow$ Furthering steps). In this model, the affective component (A) is grounded in the limbic system, which governs emotion in the human brain, and relates to learners' interest in and curiosity about natural objects and phenomena. The behavioral component (B) involves the occipital lobe, which processes visual information, the temporal lobe, which is involved in understanding and producing language, and the sensorimotor cortex, which processes sensorimotor information, and relates to hands-on scientific activity. The cognitive component (C) is grounded in the prefrontal association cortex, which is involved in thinking, planning, judgment, and problem solving. In these respects, the model is "brain-based." Within the three steps of each component, the diversifying step (D) generates diverse variants for that component; the comparing-selecting step (E) tests and selects the useful or valuable variants in light of value or utility; and the extending-applying step (F) extends or applies the selected variants to similar situations. In these respects, the model is "evolutionary." Across the ABC components, the model emphasizes a DARWIN (Driving Affective Realm for Whole Intellectual Network) approach, reflecting the importance of affective factors as the starting point of scientific activity and the dominant role of the limbic system relative to the neocortex, which is associated with thinking. The model can be implemented flexibly in various forms and at various levels depending on the science topics covered in school and the characteristics of the students.

미국 네브라스카의 관개된 옥수수 농업생태계의 복사, 에너지 및 엔트로피의 교환 (Radiation, Energy, and Entropy Exchange in an Irrigated-Maize Agroecosystem in Nebraska, USA)

  • 양현영;요하나 마리아 인드라와티;앤드류 수커;이지혜;이경도;김준
    • 한국농림기상학회지
    • /
    • Vol. 22, No. 1
    • /
    • pp.26-46
    • /
    • 2020
  • The goal of this study is to evaluate and document the exchange of radiation, energy, and entropy in an irrigated maize field. From a thermodynamic perspective, we regard this agroecosystem as an open thermodynamic system on which solar radiation imposes a large gradient between the inside and the outside of the system. We therefore hypothesized that, as the system moves away from equilibrium, this eco-social system, a nonequilibrium dissipative process, uses all of its biological, physical, chemical, and anthropogenic components to resist and reduce the gradient imposed by the sun, in accordance with thermodynamic principles. As a first step toward testing this hypothesis, we quantified the exchange of radiation, energy, and entropy using flux and micrometeorological data observed from 2003 to 2014 at the AmeriFlux NE1 site, a maize field in Nebraska, USA. According to the 12-year mean growing-season results, the energy capture of the system (the ratio of net radiation to downward shortwave radiation, Rn/Rs↓) increased with maize growth and was about 80 % higher in the growing season than in the non-growing season. During the growing season, the entropy production (σ) within the system averaged 9.56 MJ m-2 K-1 and was determined mainly by the downward shortwave radiation. The entropy transport (J) was contributed, in order, by the latent heat flux, net longwave radiation, and sensible heat flux, and the amount exported to the environment outside the system was -7.99 MJ m-2 K-1, corresponding to ~84 % of σ. The net entropy accumulated within the system during each growing season (dS/dt) was therefore 1.57 MJ m-2 K-1. The carbon uptake efficiency (CUE) was 1.25-1.62 and the water use efficiency (WUE) was 1.98-2.92 g C (kg H2O)-1, both increasing with maize growth. In 2012, when irrigation was applied more frequently because of an extreme drought, both σ and J reached maxima about 10 % above the long-term means; as a result they largely offset each other, and dS/dt remained only slightly above normal. Because frequent irrigation during the drought shifted the main pathway of entropy transport from sensible heat flux to latent heat flux, production and CUE exceeded their normal values, but the water and light use efficiencies actually decreased. Based on these results, several issues remain before changes in the sustainability of the irrigated maize eco-social system can be assessed. Self-organizing processes effectively reduce the gradient between the system and its surroundings. Therefore, along with the entropy data, further analyses are needed, such as spectral entropy, which indicates the self-organizing capacity serving as a measure of sustainability, or dynamic process network analysis, which can gauge changes in the structure of subsystems and in the strength and direction of energy and material flows.
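
A quick arithmetic check of the entropy budget quoted above, assuming the growing-season balance closes as dS/dt = σ + J, with J negative for export to the surroundings; the values are the 12-year means reported in the abstract.

```python
# Back-of-the-envelope check of the growing-season entropy budget (MJ m^-2 K^-1).
sigma = 9.56    # entropy production inside the system
J = -7.99       # net entropy transported out to the surroundings
dS_dt = sigma + J
print(f"dS/dt = {dS_dt:.2f} MJ m-2 K-1")               # ~1.57, as reported
print(f"export / production = {abs(J) / sigma:.0%}")   # ~84 %
```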

스마트폰 위치기반 어플리케이션의 이용의도에 영향을 미치는 요인: 프라이버시 계산 모형의 적용 (Factors Influencing the Adoption of Location-Based Smartphone Applications: An Application of the Privacy Calculus Model)

  • 차훈상
    • Asia pacific journal of information systems
    • /
    • Vol. 22, No. 4
    • /
    • pp.7-29
    • /
    • 2012
  • Smartphones and their applications (i.e., apps) are increasingly penetrating consumer markets. According to a recent report from the Korea Communications Commission, nearly 50% of mobile subscribers in South Korea are smartphone users, which amounts to over 25 million people. In particular, the importance of the smartphone has risen as a geospatially aware device that provides various location-based services (LBS) through its GPS capability. Popular LBS include map and navigation, traffic and transportation updates, shopping and coupon services, and location-sensitive social network services. Overall, emerging location-based smartphone apps (LBA) offer significant value by providing greater connectivity, personalization, and information and entertainment in a location-specific context. However, the rapid growth of LBA and their benefits have been accompanied by concerns over the collection and dissemination of individual users' personal information through ongoing tracking of their location, identity, preferences, and social behaviors. The majority of LBA users tend to agree and consent to the LBA provider's terms and privacy policy on the use of location data in order to get the immediate services. This tendency further increases the potential risks of unprotected exposure of personal information and serious invasions and breaches of individual privacy. To address the complex issues surrounding LBA, particularly from the user's behavioral perspective, this study applied the privacy calculus model (PCM) to explore the factors that influence the adoption of LBA. According to PCM, consumers are engaged in a dynamic adjustment process in which privacy risks are weighed against the benefits of information disclosure. Consistent with the principal notion of PCM, we investigated how individual users make a risk-benefit assessment in which personalized service and locatability act as benefit-side factors and information privacy risks act as a risk-side factor accompanying LBA adoption. In addition, we consider the moderating role of trust in the service provider on the inhibiting effect of privacy risks on user intention to adopt LBA. Further, we include perceived ease of use and usefulness as additional constructs to examine whether the technology acceptance model (TAM) can be applied in the context of LBA adoption. The research model with ten (10) hypotheses was tested using data gathered from 98 respondents through a quasi-experimental survey method. During the survey, each participant was asked to navigate a website where an experimental simulation of an LBA allowed the participant to purchase time- and location-sensitive discounted tickets for nearby stores. Structural equation modeling using partial least squares validated the instrument and the proposed model. The results showed that six (6) out of ten (10) hypotheses were supported. Regarding the core PCM, H2 (locatability ${\rightarrow}$ intention to use LBA) and H3 (privacy risks ${\rightarrow}$ intention to use LBA) were supported, while H1 (personalization ${\rightarrow}$ intention to use LBA) was not. Further, we could not find any interaction effects (personalization X privacy risks, H4, and locatability X privacy risks, H5) on the intention to use LBA. In terms of privacy risks and trust, as mentioned above, we found a significant negative influence of privacy risks on intention to use (H3) but a positive influence of trust, which supported H6 (trust ${\rightarrow}$ intention to use LBA).
The moderating effect of trust on the negative relationship between privacy risks and intention to use LBA was tested and confirmed, supporting H7 (privacy risks X trust ${\rightarrow}$ intention to use LBA). The two hypotheses regarding the TAM, H8 (perceived ease of use ${\rightarrow}$ perceived usefulness) and H9 (perceived ease of use ${\rightarrow}$ intention to use LBA), were supported; however, H10 (perceived usefulness ${\rightarrow}$ intention to use LBA) was not. The results of this study offer the following key findings and implications. First, the application of PCM was found to be a good analysis framework in the context of LBA adoption. Many of the hypotheses in the model were confirmed, and the high value of $R^2$ (i.e., 51%) indicated a good fit of the model. In particular, locatability and privacy risks are found to be appropriate PCM-based antecedent variables. Second, the existence of a moderating effect of trust in the service provider suggests that the same marginal change in the level of privacy risks may differentially influence the intention to use LBA. That is, while privacy risks increasingly become an important social issue and will negatively influence the intention to use LBA, it is critical for LBA providers to build consumer trust and confidence to successfully mitigate this negative impact. Lastly, we could not find sufficient evidence that the intention to use LBA is influenced by perceived usefulness, which has been very well supported in most previous TAM research. This may suggest that future research should examine the validity of applying TAM, and further extend or modify it, in the context of LBA or other similar smartphone apps.
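
The study above tests the moderating effect of trust on the privacy-risk to intention relationship using PLS-based structural equation modeling. The sketch below is a much simpler stand-in for that idea: an ordinary least-squares regression with an interaction term on synthetic data; the coefficients, sample size, and variable names are made up for illustration and are not the paper's model or data.

```python
# Illustrative moderation test: does "trust" attenuate the negative
# effect of "privacy risk" on "intention"?  Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
risk = rng.normal(size=n)
trust = rng.normal(size=n)
# True generating model: risk hurts intention, but less so when trust is high.
intention = -0.5 * risk + 0.4 * trust + 0.3 * risk * trust + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), risk, trust, risk * trust])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
for name, b in zip(["intercept", "risk", "trust", "risk x trust"], beta):
    print(f"{name:>13}: {b:+.2f}")
# A positive "risk x trust" coefficient indicates that trust moderates
# (attenuates) the negative risk -> intention relationship.
```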


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • 한국지능시스템학회:학술대회논문집
    • /
    • Korea Fuzzy Logic and Intelligent Systems Society, 1993: Fifth International Fuzzy Systems Association World Congress 93
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives the users the opportunity to choose membership functions of any shape. However, a significant memory waste can be registered as well. It is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word that represents the index of the corresponding membership function. In our case, then, Length = 24, and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set, with a fuzzy-set word dimension of 8*5 bits.
Therefore, the dimension of the memory would have been 128*40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then the non-null weight deriving from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and that there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on the memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain a good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
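
The word-length arithmetic above (Length = nfm·(dm(m)+dm(fm)) = 24 bits per row, versus 8·5 = 40 bits for full vectorial storage) can be reproduced in a few lines. The sketch below simply recomputes those sizes, assuming the example term-set parameters quoted in the abstract.

```python
# Reproduce the memory sizing comparison from the abstract.
U = 128          # elements in the universe of discourse (memory rows)
n_sets = 8       # fuzzy sets in the term set
levels = 32      # discretization levels for membership values
nfm = 3          # max number of fuzzy sets with non-null membership per element

bits_value = (levels - 1).bit_length()   # 5 bits for 32 truth levels
bits_index = (n_sets - 1).bit_length()   # 3 bits to index 8 membership functions

word_sparse = nfm * (bits_value + bits_index)   # Length = nfm*(dm(m)+dm(fm)) = 24
word_full = n_sets * bits_value                 # vectorial storage: 8*5 = 40

print(f"sparse scheme : {U} x {word_sparse} bits = {U * word_sparse} bits")
print(f"full vectorial: {U} x {word_full} bits = {U * word_full} bits")
```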
