
Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique is proposed to address the fundamental problems of collaborative filtering: the cold-start problem, the scalability problem, and the data-sparsity problem. Previous collaborative filtering techniques make recommendations based on the predicted preference of a user for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, when the density of the user-preference matrix is low, the reliability of the recommender system decreases rapidly, and composing a similar-item subset and a similar-user subset becomes more difficult. In addition, as the scale of the service increases, the time needed to compose these subsets grows geometrically, and the response time of the recommender system increases accordingly. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, item clusters and user clusters are formed from item and user feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. In this way, the run time for composing a similar-item or similar-user subset can be economized, the reliability of the recommender system can be made higher than when only user-preference information is used to compose these subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them.
In this phase, a list of items is built for each user by examining the item clusters in descending order of the inter-cluster preference of the cluster to which the user belongs, and then selecting and ranking items according to predicted or recorded user-preference information. With this method, the model-creation phase bears the highest load of the recommender system, and the load at run time is minimized. Therefore, the scalability problem is mitigated, and large-scale recommendation can be performed with highly reliable collaborative filtering. Third, missing user-preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user-preference matrix. Existing studies used either item-based or user-based prediction. This paper improves on Hao Ji's idea of using both item-based and user-based prediction: the reliability of the recommendation service is improved by combining the predicted values of the two techniques according to the conditions of the recommendation model. By predicting user preferences from the item or user clusters, the time required for prediction is reduced, and missing preferences can be predicted at run time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback. In this phase, normalized user feedback is applied to the item and user feature vectors. This mitigates the problems caused by adopting concepts from context-based filtering, namely item and user feature vectors built from user profiles and item properties, whose limitation lies in quantifying the qualitative features of items and users.
Therefore, the elements of the user and item feature vectors are matched one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques on two measures: MAE (Mean Absolute Error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommender system; by response time, it was found suitable for large-scale recommender systems. This paper thus suggests an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, with some limitations: because the technique focuses on reducing time complexity, a large improvement in reliability was not to be expected. The next topic will be to improve this technique with rule-based filtering.
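The cluster-based prediction of a missing preference (the third step above) can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the ratings, the cluster assignments, and the blending weight `alpha` are all invented for the example.

```python
# Minimal sketch of cluster-based preference prediction (hypothetical data).
# A missing user-item preference is estimated by blending a user-cluster
# estimate with an item-side estimate, in the spirit of the paper's third step.

# ratings[user][item] = recorded preference (None = missing)
ratings = {
    "u1": {"i1": 5, "i2": 4, "i3": None},
    "u2": {"i1": 4, "i2": 5, "i3": 2},
    "u3": {"i1": 2, "i2": 1, "i3": 5},
}
user_cluster = {"u1": 0, "u2": 0, "u3": 1}   # hypothetical clustering
item_cluster = {"i1": 0, "i2": 0, "i3": 1}

def cluster_mean(target_user, target_item):
    """Average rating given by the target user's cluster to the item's cluster."""
    vals = [r for u, items in ratings.items()
            if user_cluster[u] == user_cluster[target_user]
            for i, r in items.items()
            if item_cluster[i] == item_cluster[target_item] and r is not None]
    return sum(vals) / len(vals) if vals else None

def predict(user, item, alpha=0.5):
    """Blend a cluster-side and an item-side estimate (alpha is a free weight)."""
    user_side = cluster_mean(user, item)          # inter-cluster preference
    item_side_vals = [items[item] for items in ratings.values()
                      if items.get(item) is not None]
    item_side = sum(item_side_vals) / len(item_side_vals)
    return alpha * user_side + (1 - alpha) * item_side

print(round(predict("u1", "i3"), 2))
```

Here the cluster mean plays the role of the inter-cluster preference, and the item side is a plain item average; the paper combines the two estimates according to the conditions of the recommendation model, which the fixed weight only approximates.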

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud computing research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords using social network analysis measures. Through the analysis, we identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes the positions of research topics in a two-dimensional space, whose dimensions are the frequency of keywords (X-axis) and the rate of increase in the degree centrality of keywords (Y-axis). Based on the values of these two dimensions, the space of a research map is divided into four areas: maturation, growth, promising, and decline.
An area with high keyword frequency but a low rate of increase of degree centrality is defined as a mature technology area; the area where both keyword frequency and the rate of increase of degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase of degree centrality is high is defined as a promising technology area; and the area where both are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to grasp the main research trends in cloud computing easily and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank highest by the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords for main elemental technologies such as data outsourcing, error-detection methods, and infrastructure construction showed high centrality in 2010-2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. The results indicate that distributed systems and grid computing received much attention as similar computing paradigms in the early stage of cloud computing research.
The early stage of cloud computing research focused on understanding and investigating cloud computing as an emergent technology, linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in the movement of security and virtualization technologies from the promising area to the growth area in the cloud computing research trend maps. Moreover, this study reveals that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
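The four-quadrant classification behind the research trend map can be sketched directly. The thresholds and keyword values below are hypothetical illustrations; the paper derives positions from actual keyword frequencies and centrality changes.

```python
# Sketch of the research trend map: keywords are placed in four quadrants by
# frequency (X-axis) and rate of increase of degree centrality (Y-axis).

FREQ_T, GROWTH_T = 50, 0.10   # quadrant boundaries (assumed values)

def quadrant(freq, centrality_growth):
    """Map a keyword's (frequency, centrality-growth) pair to a quadrant."""
    if freq >= FREQ_T and centrality_growth >= GROWTH_T:
        return "growth"
    if freq >= FREQ_T:
        return "maturation"
    if centrality_growth >= GROWTH_T:
        return "promising"
    return "decline"

keywords = {
    "security":       (40, 0.30),   # low frequency, rising centrality
    "virtualization": (80, 0.25),   # high frequency, rising centrality
    "grid computing": (30, 0.01),   # low frequency, flat centrality
}
for kw, (f, g) in keywords.items():
    print(kw, "->", quadrant(f, g))
```

With these invented values the three keywords land in the promising, growth, and declining areas respectively, mirroring the movements the abstract describes.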

Quality Assurance of Patients for Intensity Modulated Radiation Therapy (세기조절방사선치료(IMRT) 환자의 QA)

  • Yoon Sang Min;Yi Byong Yong;Choi Eun Kyung;Kim Jong Hoon;Ahn Seung Do;Lee Sang-Wook
    • Radiation Oncology Journal / v.20 no.1 / pp.81-90 / 2002
  • Purpose: To establish and verify a proper and practical IMRT (intensity-modulated radiation therapy) patient QA (quality assurance) program. Materials and Methods: An IMRT QA program consisting of 3 steps and 16 items was designed, and its validity was examined by applying it to 9 patients and 12 IMRT cases at various sites. The three-step QA program consists of an RTP-related QA procedure, a treatment-information-flow QA procedure, and a treatment-delivery QA procedure. Evaluation of organ constraints, the validity of the point dose, and the dose distribution are the major issues in the RTP-related QA procedure. Leaf-sequence file generation, evaluation of the MLC control file, comparison with the dry-run film, and the IMRT field simulation image are included in the treatment-information-flow QA procedure. Patient setup QA, verification of the IMRT treatment fields on the patients, and examination of the data in the Record & Verify system make up the treatment-delivery QA procedure. Results: The point-dose measurements of 10 cases agreed with the RTP calculation within 3%. One case showed more than a 3% difference and another more than 5%, which was outside the tolerance level. No differences of more than 2 mm were found between the RTP leaf sequence and the dry-run film. Film dosimetry and the dose distribution from the phantom plan showed the same tendency, but quantitative analysis was not possible because of the nature of film dosimetry. No errors were found in the MLC control file, and one mis-registration case was found before treatment. Conclusion: This study shows the usefulness and necessity of an IMRT patient QA program. The whole procedure of this program should be performed, especially by institutions that have just started to accumulate experience; however, the program is complex and time-consuming.
Therefore, we propose practical and essential QA items for institutions in which IMRT is performed as a routine procedure.
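The point-dose item of the QA program reduces to a simple tolerance check on measured versus RTP-calculated dose. A sketch with invented dose values (only the 3% tolerance comes from the paper):

```python
# Point-dose QA check: flag cases whose relative difference between measured
# and calculated dose exceeds the 3% tolerance. Dose values are illustrative.

def dose_check(measured, calculated, tolerance=0.03):
    """Return (relative difference, within-tolerance flag)."""
    diff = abs(measured - calculated) / calculated
    return diff, diff <= tolerance

cases = [(2.02, 2.00), (2.11, 2.00)]   # (measured Gy, calculated Gy), hypothetical
for m, c in cases:
    diff, ok = dose_check(m, c)
    print(f"{diff:.1%} {'within' if ok else 'outside'} tolerance")
```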

DISEASE DIAGNOSED AND DESCRIBED BY NIRS

  • Tsenkova, Roumiana N.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1031-1031 / 2001
  • The mammary gland is made up of remarkably sensitive tissue, which has the capability of producing a large volume of secretion, milk, under normal or healthy conditions. When bacteria enter the gland and establish an infection (mastitis), inflammation is initiated, accompanied by an influx of white cells from the bloodstream, altered secretory function, and changes in the volume and composition of the secretion. Cell numbers in milk are closely associated with inflammation and udder health, and these somatic cell counts (SCC) are accepted as the international standard measurement of milk quality in dairy and for mastitis diagnosis. NIR spectra of unhomogenized composite milk samples from 14 cows (healthy and mastitic) were measured 7 days after parturition and during the next 30 days of lactation. Different multivariate analysis techniques were used to diagnose the disease at a very early stage and to determine how the spectral properties of milk vary with its composition and animal health. A PLS model for prediction of somatic cell count (SCC) from NIR milk spectra was built. The best accuracy of determination for the 1100-2500 nm range was found using smoothed absorbance data and 10 PLS factors. For an independent validation set of samples, the standard error of prediction was 0.382, the correlation coefficient 0.854, and the coefficient of variation 7.63%. SCC determination from NIR milk spectra was found to be indirect, based on the related changes in milk composition. From the spectral changes, we learned that when mastitis occurred, the most significant factors that simultaneously influenced the milk spectra were alteration of milk proteins and changes in the ionic concentration of milk. This was consistent with the results we obtained when we subsequently applied two-dimensional correlation spectroscopy (2DCOS). Two-dimensional correlation analysis of NIR milk spectra was performed to assess the changes in milk composition that occur when somatic cell count (SCC) levels vary.
The synchronous correlation map revealed that when SCC increases, protein levels increase while water and lactose levels decrease. Analysis of the asynchronous plot indicated that changes in the water and fat absorptions occur before those of other milk components. In addition, the technique was used to assess the changes in milk during a period when SCC levels do not vary appreciably. The results indicated that the milk components are in equilibrium, with no appreciable change in a given component with respect to another; this was found in both healthy and mastitic animals. However, milk components were found to vary with SCC content regardless of the range considered. This important finding demonstrates that 2-D correlation analysis can be used to track even subtle changes in milk composition in individual cows. To find the right SCC threshold for mastitis diagnosis at the cow level, classification of milk samples was performed using soft independent modeling of class analogy (SIMCA) and different spectral data pretreatments. Two SCC levels, 200,000 cells/mL and 300,000 cells/mL, were set up and compared as thresholds to discriminate between healthy and mastitic cows. The best detection accuracy was found with 200,000 cells/mL as the threshold for mastitis and smoothed absorbance data: 98% of the milk samples in the calibration set and 87% of the samples in the independent test set were correctly classified. Study of the spectral information showed that the successful mastitis diagnosis was based on revealing the spectral changes related to the corresponding changes in milk composition. NIRS, combined with different spectral data pretreatments, can provide a faster, nondestructive alternative to current methods for mastitis diagnosis and new insight into the disease at the molecular level.
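The SCC-threshold screening step can be sketched as a one-line classifier plus an accuracy count. The sample values and true labels below are invented; only the 200,000 cells/mL threshold comes from the paper.

```python
# SCC-threshold mastitis screening sketch: label milk samples against the
# 200,000 cells/mL threshold and count how many agree with the true labels.

THRESHOLD = 200_000  # cells/mL, the threshold the paper found best

def classify(scc):
    """Label a sample by its somatic cell count."""
    return "mastitic" if scc >= THRESHOLD else "healthy"

# (SCC, true label) pairs, invented for illustration
samples = [(50_000, "healthy"), (150_000, "healthy"),
           (250_000, "mastitic"), (900_000, "mastitic"),
           (180_000, "mastitic")]          # a miss: low SCC but infected

correct = sum(classify(scc) == label for scc, label in samples)
print(f"accuracy: {correct}/{len(samples)}")
```

The deliberate miss in the sample list illustrates why the paper compares thresholds: a fixed cutoff trades false negatives against false positives.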


A Study on the Early North Sung Period Buddhist Literatures Found in the Pagoda of Suzhou Ruiguangsi (소주(蘇州) 서광사탑(瑞光寺塔) 출토(出土) 북송초기(北宋初期)의 불교문헌(佛敎文獻) 연구(硏究))

  • Song, Il-Gie
    • Journal of Korean Library and Information Science Society / v.45 no.1 / pp.81-102 / 2014
  • In 1978, an investigation before the repair of the pagoda of Suzhou Ruiguangsi (蘇州 瑞光寺) found many Buddhist literatures in the center of the pagoda's third floor. This study analyzes the forms and values of those literatures. Since 123 precious items made from the Tang (唐) period to the early North Sung (北宋) period were among the Buddhist literatures found, they have very important meaning for the bibliography of the time. Suzhou Ruiguangsi (蘇州 瑞光寺) was built by the first king of Wu (吳), Sun Quan (孫權), who built the temple to welcome the monk Xingkang (性康) from Kangjuguo (康居國). When first built it was called Puji Chanyuan (普濟禪院), and it was renamed the current Ruiguangsi (瑞光寺) after a major expansion in the early North Sung (北宋) period. The Ruiguangta (瑞光塔) was built by Sun Quan (孫權) in A.D. 247, immediately after the temple had been built, as a 13-story pagoda to pray for the repose of his mother and for national prosperity and the welfare of the people. As time passed, the pagoda was heavily damaged, and it was rebuilt in A.D. 1017 (天禧 1) in the early North Sung (北宋) period, when it was named Duobaota (多寶塔). The literatures found in Ruiguangta consist of 107 items in 3 sets of dharani (陀羅尼) scriptures and 16 volumes in 5 books, 123 items in total. In particular, there were 7 books of a full-set transcript of the Lotus Sutra (法華經) in relatively complete form. This sutra, written in gilt lettering on dark blue paper (紺紙金字寫經), was made in the Middle Tang (中唐) period and is believed to be the only one of its kind existing in East Asia. There were also 6 books of a small-letter edition of the Lotus Sutra (法華經) in complete form, published during the early North Sung (北宋) period.
This edition is incorrectly stated in most general reference books published in China as having been engraved in the early Tang period (初唐), because a Japanese scholar wrongly introduced it as having been engraved together with the Nakamura edition (中村本). It is meaningful that this error can be corrected by the findings of this study.

An Electrical Conductivity Reconstruction for Evaluating Bone Mineral Density: Simulation (골 밀도 평가를 위한 뼈의 전기 전도도 재구성: 시뮬레이션)

  • 최민주;김민찬;강관석;최흥호
    • Journal of Biomedical Engineering Research / v.25 no.4 / pp.261-268 / 2004
  • Osteoporosis is a clinical condition in which the amount of bone tissue is reduced and the likelihood of fracture is increased. It is known that the electrical properties of bone are related to its density; in particular, the electrical resistance of bone decreases as bone loss increases. This implies that the electrical properties of bone may be a useful parameter for diagnosing osteoporosis, provided they can be readily measured. This study attempted to evaluate the electrical conductivity of bone using electrical impedance tomography (EIT). In general it is not easy to obtain an EIT image of bone, owing to the large difference (about two orders of magnitude) in electrical properties between bone and the surrounding soft tissue. In the present study, we took an adaptive mesh-regeneration technique originally developed for the detection of two-phase boundaries and modified it to reconstruct the electrical conductivity inside the boundary, provided the geometry of the boundary is given. A numerical simulation was carried out for a tibia phantom: a circular cylindrical phantom (radius 40 mm) containing an ellipsoidal homogeneous tibia (semi-axes of 17 mm and 15 mm) surrounded by soft tissue. The bone was located 15 mm above the center of the circular cross-section of the phantom. The electrical conductivity of the soft tissue was set to 4 mS/cm, and that of the bone was varied from 0.01 to 1 mS/cm. The simulation included measurement errors in order to examine their effects. The results showed that, if the measurement error was kept below 5%, the reconstructed electrical conductivity of the bone was within 10% error. The accuracy increased with the electrical conductivity of the bone, as expected, indicating that the present technique provides more accurate information for osteoporotic bones.
It should be noted that the simulation is based on a simple two-phase image of the bone and the surrounding soft tissue, with the anatomical information provided. Nevertheless, the study indicates the possibility that the EIT technique may be used as a new means to detect the bone loss leading to osteoporotic fractures.

Dispute of Part-Whole Representation in Conceptual Modeling (부분-전체 관계에 관한 개념적 모델링의 논의에 관하여)

  • Kim, Taekyung;Park, Jinsoo;Rho, Sangkyu
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.97-116 / 2012
  • Conceptual modeling is an important step toward successful system development. It helps system designers and business practitioners share the same view of domain knowledge. If the work is successful, the result of conceptual modeling can be beneficial in increasing productivity and reducing failures. However, the value of conceptual modeling is unlikely to be evaluated uniformly, because we lack agreement on how to elicit concepts and how to represent them with conceptual modeling constructs. In particular, designing relationships between components, also known as part-whole relationships, has been regarded as complicated work. The recent study "Representing Part-Whole Relations in Conceptual Modeling: An Empirical Evaluation" (Shanks et al., 2008), published in MIS Quarterly, can be regarded as one such positive effort. Not only is it one of the few attempts to clarify how to select modeling alternatives in part-whole design, it also reports results based on an empirical experiment. Shanks et al. argue that there are two modeling alternatives for representing part-whole relationships: an implicit representation and an explicit one. On the basis of an experiment, they insist that the explicit representation increases the value of a conceptual model, and they justify their findings by citing the BWW ontology. Recently, the study by Shanks et al. has faced criticism. Allen and March (2012) argue that the experiment of Shanks et al. lacks validity and reliability, since the experimental setting suffers from an error-prone and self-defensive design. They point out that the experiment was intentionally fabricated to support the idea that using concrete UML concepts yields positive results in understanding models. Additionally, Allen and March add that the experiment failed to consider boundary conditions, reducing its credibility. Shanks and Weber (2012) flatly contradict the argument of Allen and March (2012).
In defense, they posit that the BWW ontology is rightly applied in supporting the research and insist that the experiment is fairly acceptable. Therefore, Shanks and Weber argue that Allen and March distort the true value of Shanks et al. by pointing out minor limitations. In this study, we investigate the dispute around Shanks et al. in order to answer the following question: "What is the proper value of the study conducted by Shanks et al.?" More profoundly, we question whether the BWW ontology is the only viable option for exploring better conceptual modeling methods and procedures. To understand the key issues in the dispute, we first reviewed previous studies relating to the BWW ontology, and we critically reviewed both Shanks and Weber and Allen and March. With those findings, we further discuss theories of part-whole (or part-of) relationships that are rarely treated in the dispute. As a result, we found three additional pieces of evidence not sufficiently covered by the dispute, whose main focus is on errors of experimental method: Shanks et al. did not use Bunge's ontology properly; the refutation of a paradigm shift lacks a concrete, logical rationale; and the conceptualization of part-whole relations should be reformed. In conclusion, Allen and March properly indicate issues that weaken the value of Shanks et al. In general their criticism is reasonable; however, they do not sufficiently answer how to anchor future studies of part-whole relationships. We argue that the use of the BWW ontology should be rigorously evaluated against the original philosophical rationales surrounding part-whole existence. Moreover, conceptual modeling of part-whole phenomena should be investigated through a richer set of alternative theories. The criticism of Shanks et al. should not be regarded as a rejection of evaluating modeling methods for alternative part-whole representations.
On the contrary, it should be viewed as a call for research on usable and useful approaches to increase the value of conceptual modeling.

Monitoring of Atmospheric Aerosol using GMS-5 Satellite Remote Sensing Data (GMS-5 인공위성 원격탐사 자료를 이용한 대기 에어러솔 모니터링)

  • Lee, Kwon Ho;Kim, Jeong Eun;Kim, Young Jun;Suh, Aesuk;Ahn, Myung Hwan
    • Journal of the Korean Association of Geographic Information Studies / v.5 no.2 / pp.1-15 / 2002
  • Atmospheric aerosols interact with sunlight and affect the global radiation balance, which can cause climate change through direct and indirect radiative forcing. Because of the spatial and temporal uncertainty of aerosols in the atmosphere, aerosol characteristics are not well represented in GCMs (general circulation models). Therefore, their physical and optical characteristics should be evaluated to assess the climatic and radiative effects of atmospheric aerosols. In this study, GMS-5 satellite data and surface measurement data were analyzed with a radiative transfer model for the Yellow Sand event of April 7-8, 2000, in order to investigate the atmospheric radiative effects of Yellow Sand aerosols. MODTRAN3 simulation results revealed the relation between satellite channel albedo and aerosol optical thickness (AOT), and from this relation AOT was retrieved from the GMS-5 visible channel. Variance observations of the satellite images enabled remote sensing of the Yellow Sand particles. Back-trajectory analysis was performed to track the air mass from the Gobi desert passing over the Korean peninsula with high AOT values measured by ground-based instruments. Comparison of GMS-5 AOT with ground-measured RSR aerosol optical depth (AOD) shows that, for Yellow Sand aerosols, the albedo measured over ocean surfaces can be used to obtain the aerosol optical thickness with an appropriate aerosol model within an error of about 10%. In addition, LIDAR network measurements and the backward trajectory model showed the characteristics and appearance of Yellow Sand during Yellow Sand events. These data will support the monitoring of Yellow Sand aerosols.
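The retrieval step, mapping channel albedo to AOT through a relation derived from radiative-transfer runs, can be sketched as lookup-table interpolation. The table values here are invented placeholders, assuming albedo grows monotonically with AOT over ocean; the paper builds the actual relation from MODTRAN3 simulations.

```python
# Sketch of AOT retrieval from channel albedo via a lookup table of
# (albedo, AOT) pairs from hypothetical radiative-transfer model runs.

import bisect

TABLE = [(0.05, 0.0), (0.10, 0.5), (0.18, 1.0), (0.30, 2.0)]  # invented values

def retrieve_aot(albedo):
    """Linear interpolation in the albedo -> AOT lookup table, clamped at ends."""
    xs = [a for a, _ in TABLE]
    i = bisect.bisect_left(xs, albedo)
    if i == 0:
        return TABLE[0][1]
    if i == len(TABLE):
        return TABLE[-1][1]
    (x0, y0), (x1, y1) = TABLE[i - 1], TABLE[i]
    return y0 + (y1 - y0) * (albedo - x0) / (x1 - x0)

print(retrieve_aot(0.14))
```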


The Applicant's Liability of Examination of Document and Notification of the Discrepancies in Credit Transaction (신용장거래에 있어서 개설의뢰인의 서류심사 및 통지의무)

  • Park, Kyu-Young
    • International Commerce and Information Review / v.8 no.4 / pp.105-121 / 2006
  • This study concerns judgements of the Korean Supreme Court on a letter of credit (L/C) transaction involving fraud by the beneficiary. It analyses the judgements and presents the basic grounds on which they were established. An L/C transaction has major parties, such as the beneficiary, the issuing bank, or the confirming bank, and other parties such as the applicant, the negotiating bank, the advising bank, and the paying bank. In this case, the beneficiary, a French weapons supplier, did not ship the goods but created a false bill of lading, had his bank pay against the documents he presented, received the proceeds from the negotiating or collecting bank, and thereafter went bankrupt and absconded. Although the issuing bank recognized that the presented documents were inconsistent with the terms of the L/C, it did not obtain the applicant's approval of all the discrepancies, had the negotiating bank pay the proceeds to the exporter, and delivered the documents to the applicant long after the issuing bank's time for examination of documents. The applicant, on receiving the documents from the issuing bank, did not immediately examine them or inform the issuing bank whether it accepted them. Long afterwards, when the applicant tried to clear the goods through customs, it learned that the bills of lading were false and found that the documents contained other discrepancies it had not approved. As a result, the applicant, the Korea Army Transportation Command, claimed that the issuing bank must refund the amount paid because the issuing bank had examined the documents unreasonably under UCP 500 Articles 13 and 14.
Despite the applicant's claim, the issuing bank argued that it had paid the L/C proceeds reasonably after receiving the applicant's approval of one discrepancy, the delayed shipment, and that, as to the other, trivial discrepancies, the applicant had not examined the documents and given notice of the discrepancies within a reasonable time. The applicant therefore sued the issuing bank for a refund of the paid L/C proceeds. The case originally arose between Korea Exchange Bank and the Korea Army Transportation Command. The lower courts, both district and high, judged that the issuing bank had acted reasonably and made no error. In analysing the Supreme Court's judgements, the problem is whether the applicant has a duty to examine the documents and notify the issuing bank of discrepancies, and whether, if the applicant breaches such a duty, it loses the right to claim repayment from the issuing bank. Finally, such a duty on the applicant can exist only where the documents that arrived at the issuing bank were delivered to the applicant within the time for examination of documents under UCP 500 Article 14(d)(i). If the documents were delivered to the applicant after the time for examination, the applicant has no such duty, because even if the applicant had then notified the issuing bank of the discrepancies, the issuing bank could not have recovered its paid proceeds from the negotiating bank. After the issuing bank's time for examination of documents, there is no practical benefit in requiring the applicant to perform this duty. I therefore concluded that the Supreme Court's judgement was the more reasonable one.
In the following, the judgements of the Supreme Court are analysed more concretely, the basic reasons for the results are explained, and ways of protecting such L/C transactions are presented.


Estimation of Surface Solar Radiation using Ground-based Remote Sensing Data on the Seoul Metropolitan Area (수도권지역의 지상기반 원격탐사자료를 이용한 지표면 태양에너지 산출)

  • Jee, Joon-Bum;Min, Jae-Sik;Lee, Hankyung;Chae, Jung-Hoon;Kim, Sangil
    • Journal of the Korean earth science society / v.39 no.3 / pp.228-240 / 2018
  • Solar energy is calculated using meteorological (14 stations), ceilometer (2 stations), and microwave radiometer (MWR, 7 stations) data observed from the Weather Information Service Engine (WISE) over the Seoul metropolitan area. The cloud optical thickness and cloud fraction are calculated from the backscattering coefficient (BSC) of the ceilometer and the liquid water path of the MWR. Surface solar energy is then calculated using a solar radiation model with the cloud fraction from the ceilometer and the MWR. The estimated solar energy is underestimated compared with observations at both the Jungnang and Gwanghwamun stations: in linear regression analysis, the slope is less than 0.8 and the bias is negative, at less than -20 W/m^2. The solar energy estimated using the MWR is better (coefficient of determination, average R^2 = 0.8; root mean square error, average RMSE = 110 W/m^2) than that using the ceilometer. The monthly cloud fraction calculated from the ceilometer is greater by more than 0.09, and the corresponding solar energy lower by more than 50 W/m^2, compared with the MWR. While there are differences depending on location, the RMSE of the estimated solar radiation exceeds 50 W/m^2 in July and September compared with other months. As a result, the estimation of daily accumulated solar radiation shows the highest correlation at the Gwanghwamun station (R^2 = 0.80, RMSE = 2.87 MJ/day) and the lowest at the Gooro station (R^2 = 0.63, RMSE = 4.77 MJ/day).
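The two verification statistics used throughout the abstract, the coefficient of determination (R^2) and RMSE, can be computed as follows. The observed/estimated value pairs are invented for illustration:

```python
# Compute R^2 and RMSE for estimated vs. observed surface solar radiation.
# The value pairs below are hypothetical, not from the paper.

import math

def r2_rmse(obs, est):
    """Coefficient of determination and root mean square error."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))   # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)         # total sum of squares
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)

obs = [520.0, 300.0, 610.0, 150.0]   # observed W/m^2 (invented)
est = [480.0, 320.0, 570.0, 180.0]   # model estimates (invented)
r2, rmse = r2_rmse(obs, est)
print(f"R^2={r2:.2f}, RMSE={rmse:.1f} W/m^2")
```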