• Title/Summary/Keyword: 성능평가기법 (performance evaluation techniques)


Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.207-221 / 2023
  • This study proposes deep neural network models for estimating air temperature from the Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The temperature at 1.5 m above the ground impacts not only daily life but also weather warnings such as those for cold and heat waves. Many studies estimate air temperature from the land surface temperature (LST) retrieved by satellites, because air temperature has a strong relationship with LST. However, the LST algorithm, a Level 2 output of GK-2A, works only on clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates air temperature from L1B data, which are radiometrically and geometrically calibrated from the raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the estimated air temperature is used to evaluate the models. The in-situ air temperature data from 95 stations comprised 2,496,634 records, of which 42.1% could be paired with LST and 98.4% with L1B. Data from 2020 and 2021 were used for training, and data from 2022 for validation. The DNN is designed with an input layer taking 16 channels and four hidden fully connected layers to estimate an air temperature. Using the 16 bands of L1B, the DNN achieved an RMSE of 2.22℃, outperforming the baseline model's RMSE of 3.55℃ under clear-sky conditions; the total RMSE including overcast samples was 3.33℃. This suggests that the DNN is able to overcome cloud effects. However, the model showed different characteristics in the seasonal and hourly analyses, and solar information should be appended as an input to make the DNN more general, because the summer and winter seasons showed low coefficients of determination with high standard deviations.
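
For readers who want to see the shape of such a model, below is a minimal PyTorch sketch of the architecture as described (an input layer taking the 16 L1B channels, four hidden fully connected layers, and RMSE-based evaluation). The hidden width, activation, optimizer, and learning rate are assumptions; the abstract does not specify them.

```python
# Hypothetical sketch of the paper's DNN: 16 L1B channels in, air temperature out.
# Hidden layer widths, activation, and optimizer are assumptions, not from the paper.
import torch
import torch.nn as nn

class AirTempDNN(nn.Module):
    def __init__(self, n_channels: int = 16, hidden: int = 64):
        super().__init__()
        layers = []
        in_dim = n_channels
        for _ in range(4):                      # four hidden fully connected layers
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, 1))     # scalar air-temperature estimate
        self.net = nn.Sequential(*layers)

    def forward(self, x):                       # x: (batch, 16) per-pixel L1B values
        return self.net(x).squeeze(-1)

model = AirTempDNN()
loss_fn = nn.MSELoss()                          # RMSE = sqrt(MSE), as used for evaluation
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 16)                         # dummy batch of 16-channel pixels
y = torch.randn(32)                             # dummy in-situ temperatures
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("RMSE:", loss.sqrt().item())
```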

Performance Evaluation of Monitoring System for Sargassum horneri Using GOCI-II: Focusing on the Results of Removing False Detection in the Yellow Sea and East China Sea (GOCI-II 기반 괭생이모자반 모니터링 시스템 성능 평가: 황해 및 동중국해 해역 오탐지 제거 결과를 중심으로)

  • Han-bit Lee;Ju-Eun Kim;Moon-Seon Kim;Dong-Su Kim;Seung-Hwan Min;Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1615-1633 / 2023
  • Sargassum horneri is a floating alga that breeds in large quantities in the Yellow Sea and East China Sea and then drifts to the coast of the Republic of Korea, causing various problems such as damaging the environment and fish farms. To effectively prevent damage and preserve the coastal environment, detection algorithms for Sargassum horneri using satellite-based remote sensing have been actively developed. However, incorrect detection information increases the distance that ships collecting Sargassum horneri must travel and causes confusion in the response of the relevant local governments or institutions, so it is very important to minimize false detections when producing Sargassum horneri spatial information. This study applied a technology that automatically removes false detections from the GOCI-II-based Sargassum horneri detection algorithm of the National Ocean Satellite Center (NOSC) of the Korea Hydrographic and Oceanographic Agency (KHOA). Based on an analysis of the causes of the major false detections, the procedure removes linear and sporadic false detections, and also treats as false detections the green algae that occur in large quantities along the coast of China in spring and summer. The automatic false-detection removal was applied to the dates on which Sargassum horneri occurred between February 24 and June 25, 2022. Visual assessment results were generated using mid-resolution satellite images, and qualitative and quantitative evaluations were performed. Linear false detections were completely removed, and most of the sporadic and green-algae false detections that affected the distribution were removed. Even after the automatic removal process, the distribution area of Sargassum horneri could be confirmed against the visual assessment results, and the accuracy and precision calculated with a binary classification model averaged 97.73% and 95.4%, respectively. The recall was very low at 29.03%, presumably because of the movement of Sargassum horneri during the observation-time gap between GOCI-II and the mid-resolution satellite images, differences in spatial resolution, geolocation deviation from orthocorrection, and cloud masking. The false-detection removal results of this study make it possible to determine the spatial distribution of Sargassum horneri in near real time, but they are limited in accurately estimating biomass. Therefore, continuous research on upgrading the Sargassum horneri monitoring system must be conducted so that it can serve as data for establishing future response plans.
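
The accuracy, precision, and recall figures above come from a binary classification comparison against the visual assessment. A minimal sketch of such pixel-wise scoring is shown below; the mask construction and gridding are assumptions, since the abstract does not describe the exact matching procedure.

```python
# Hedged sketch: pixel-wise binary-classification scores between a detection mask
# and a visual-assessment reference mask, as used to report accuracy/precision/recall.
# Mask sources and gridding are assumptions; the paper's exact matching is not specified.
import numpy as np

def binary_scores(detected: np.ndarray, reference: np.ndarray):
    tp = np.sum(detected & reference)          # detected and confirmed by assessment
    fp = np.sum(detected & ~reference)         # false detection remaining after removal
    fn = np.sum(~detected & reference)         # missed Sargassum pixels
    tn = np.sum(~detected & ~reference)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

rng = np.random.default_rng(0)
det = rng.random((500, 500)) > 0.98            # dummy GOCI-II detection mask
ref = rng.random((500, 500)) > 0.98            # dummy reference mask
print("accuracy=%.4f precision=%.4f recall=%.4f" % binary_scores(det, ref))
```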

Extension Method of Association Rules Using Social Network Analysis (사회연결망 분석을 활용한 연관규칙 확장기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.111-126 / 2017
  • Recommender systems based on association rule mining significantly contribute to sellers' sales by reducing the time consumers spend searching for the products they want. Recommendations based on the frequency of transactions such as orders can effectively screen out the products that are statistically marketable among multiple products. However, a product with high sales potential can be omitted from the recommendations if it records an insufficient number of transactions at the beginning of its sale. Products missing from the association-based recommendations lose the chance of exposure to consumers, which leads to a decline in the number of transactions. In turn, diminished transactions create a vicious circle of lost opportunities to be recommended, so initial sales are likely to remain stagnant for a certain period of time. Products that are susceptible to fashion or seasonality, such as clothing, may be greatly affected. This study aims to expand association rules so that the recommendation list includes products whose initial transaction frequency is low despite their high sales potential. The particular purpose is to predict the strength of the direct connection between two unconnected items from the properties of the paths located between them. An association between two items revealed in transactions can be interpreted as an interaction between them, which can be expressed as a link in a social network whose nodes are items. The first step calculates the centralities of the nodes in the middle of the paths that indirectly connect two nodes having no direct connection. The next step identifies the number of such paths and the shortest among them. These extracted measures are used as independent variables in a regression analysis to predict the future connection strength between the nodes. The connection strength between the two nodes, defined by the number of nodes lying between them, is measured after a certain period of time. The regression results confirm that the number of paths between two products, the length of the shortest path, and the number of neighboring items connected to the products are significantly related to their potential connection strength. This study used actual order transaction data collected for three months, from February to April 2016, from an online commerce company. To reduce the complexity of the analysis as the scale of the network grows, the analysis was performed only on miscellaneous goods. Two consecutively purchased items were chosen from each customer's transactions to obtain a pair of antecedent and consequent, which provides a link for constituting the social network. The direction of each link was determined by the order in which the goods were purchased. Excluding the last ten days of the data collection period, the social network of associated items was built for the extraction of the independent variables. The model predicts, from the explanatory variables, the number of links to be connected in the next ten days. Of the 5,711 previously unconnected links, 611 were newly connected during the last ten days. In experiments, the proposed model demonstrated excellent predictive performance: of the 571 links that the model predicted, 269 were confirmed to have been connected. This is 4.4 times more than the 61 that would be found on average without any prediction model.
This study is expected to be useful for industries whose new products launch quickly and have short life cycles, since exposure time is critical for them. It can also be used to detect diseases that are rarely found in the early stages of medical treatment because of their low incidence. Since the complexity of social network analysis is sensitive to the number of nodes and links that make up the network, this study was conducted on a single category of miscellaneous goods. Future research should consider that this condition may limit the opportunity to detect unexpected associations between products belonging to different categories.
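
A minimal sketch of the path-based feature extraction described above, using networkx, is given below for one unconnected item pair. The specific centrality measure and the path-length cutoff are assumptions; the abstract names centralities, path counts, shortest-path length, and neighbor counts but not their exact definitions.

```python
# Hedged sketch of the path-based features for one unconnected item pair:
# centralities of intermediate nodes, number of (short) paths, shortest-path
# length, and neighbor counts. These would feed the regression on future strength.
import networkx as nx

G = nx.DiGraph()  # items as nodes; consecutive purchases as directed links
G.add_edges_from([("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("C", "E")])

def pair_features(G, u, v, cutoff=4):
    paths = list(nx.all_simple_paths(G, u, v, cutoff=cutoff))
    shortest = nx.shortest_path_length(G, u, v) if paths else None
    centrality = nx.degree_centrality(G)
    middles = {n for p in paths for n in p[1:-1]}       # nodes between u and v
    mid_centrality = sum(centrality[n] for n in middles) / max(len(middles), 1)
    neighbors = G.out_degree(u) + G.in_degree(v)        # neighboring-item count
    return {"n_paths": len(paths), "shortest": shortest,
            "mid_centrality": mid_centrality, "neighbors": neighbors}

print(pair_features(G, "A", "E"))  # independent variables for the regression
```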

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there has been much research on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, and analysis of exercise patterns. One of the challenges in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities with only limited information. The difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities by using data from only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how the set of classes is split at each node, the resulting tree can differ, and since some classes may be correlated, a particular tree may perform better than others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature-subset selection, a random forest enjoys more diverse ensemble members than simple bagging. Overall, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can handle a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 seconds for 2 minutes per activity from 5 volunteers. Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) samples collected for each activity (the data for the first 2 seconds are discarded because they lack a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with other similar activities, END classified all ten activities with a fairly high accuracy of 98.4%, while the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
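
The following is a hedged sketch of an ensemble of nested dichotomies with random-forest base classifiers, the method described above: each tree splits the class set randomly at every node, a random forest separates the two subsets, and class probabilities are multiplied down each tree and averaged across the committee. Committee size, forest size, and the random splitting scheme are illustrative assumptions, and the sketch assumes every class appears in the training data.

```python
# Hedged sketch of an ensemble of nested dichotomies (END) with random-forest
# base classifiers. Configuration values are illustrative, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    def __init__(self, classes, rng):
        self.classes = list(classes)
        if len(self.classes) > 1:
            order = rng.permutation(len(self.classes))   # random class split
            half = len(self.classes) // 2
            self.left = NestedDichotomy([self.classes[i] for i in order[:half]], rng)
            self.right = NestedDichotomy([self.classes[i] for i in order[half:]], rng)
            self.clf = RandomForestClassifier(n_estimators=50, random_state=0)

    def fit(self, X, y):
        if len(self.classes) > 1:                        # leaves need no classifier
            mask = np.isin(y, self.left.classes)
            self.clf.fit(X, mask.astype(int))            # binary: left vs right subset
            self.left.fit(X[mask], y[mask])
            self.right.fit(X[~mask], y[~mask])
        return self

    def predict_proba(self, X):
        # class probability = product of branch probabilities down the tree
        if len(self.classes) == 1:
            return {self.classes[0]: np.ones(len(X))}
        p_left = self.clf.predict_proba(X)[:, 1]
        probs = {}
        for c, p in self.left.predict_proba(X).items():
            probs[c] = p_left * p
        for c, p in self.right.predict_proba(X).items():
            probs[c] = (1 - p_left) * p
        return probs

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 6))                            # dummy window features
y = rng.integers(0, 10, size=500)                        # stand-ins for ten activities
committee = [NestedDichotomy(range(10), np.random.default_rng(s)).fit(X, y)
             for s in range(5)]                          # END: several random trees
proba = {c: np.mean([t.predict_proba(X[:3])[c] for t in committee], axis=0)
         for c in range(10)}                             # average committee probabilities
print("predicted class for first sample:", max(proba, key=lambda c: proba[c][0]))
```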

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As the demand for nuclear power plant equipment grows continuously worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, the preadjudication (or prescreening, for simplicity) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to train one. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs such a system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method uses TF-IDF, a widely used de facto standard for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term within the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on the collaboration of machine and human, is the most effective regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach for computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity ($\alpha$) and document-to-nuclear-system similarity ($\beta$) to derive the final score ($\gamma$) for deciding whether the presented case involves strategic material. The final score ($\gamma$) represents the similarity between the past cases and the new case. The score is derived not only from conventional TF-IDF but also from a nuclear-system similarity score that takes the context of the nuclear-system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are most similar to the new case and provides them together with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, so that the user can make a proper decision at relatively low cost. The evaluation of the system was conducted by developing a prototype and testing it with field data.
The system workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can be considered a meaningful example of knowledge service application.
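
To make the scoring concrete, here is a minimal sketch of the retrieval step: TF-IDF cosine similarity between a new case and past cases as $\alpha$, a domain similarity score as $\beta$, and a blended final score $\gamma$. The blending weight and the placeholder $\beta$ values are assumptions; the abstract does not state how the two similarities are combined.

```python
# Hedged sketch of the retrieval step: TF-IDF cosine similarity (alpha) blended
# with an assumed document-to-nuclear-system score (beta) into a final score
# (gamma), then top-3 case retrieval. The weight w is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

past_cases = [
    "zirconium alloy tubes for fuel cladding",
    "industrial centrifugal pump for water treatment",
    "neutron flux monitoring instrumentation",
]
new_case = "export request for neutron detector assembly"

vec = TfidfVectorizer()
X = vec.fit_transform(past_cases + [new_case])
alpha = cosine_similarity(X[-1], X[:-1]).ravel()   # document-to-document similarity

beta = np.array([0.9, 0.1, 0.8])                   # assumed document-to-nuclear-system scores
w = 0.5                                            # assumed blending weight
gamma = w * alpha + (1 - w) * beta                 # final score

top3 = np.argsort(gamma)[::-1][:3]                 # retrieve top-3 similar cases
for i in top3:
    print(f"case {i}: gamma={gamma[i]:.3f}")
```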

Analysis of Stability Indexes for Lightning by Using Upper Air Observation Data over South Korea (남한에서 낙뢰발생시 근접 고층기상관측 자료를 이용한 안정도 지수 분석)

  • Eom, Hyo-Sik;Suh, Myoung-Seok
    • Atmosphere / v.20 no.4 / pp.467-482 / 2010
  • In this study, characteristics of various stability indexes (SI) and environmental parameters (EP) for lightning are analysed using data from five upper-air observatories (Osan, Gwangju, Jeju, Pohang, and Baengnyeongdo) for the years 2002-2006 over South Korea. The analysed SI and EP are the lifted index, K-index, Showalter stability index, total precipitable water, mixing ratio, wind shear, and the temperature of the lifting condensation level. Lightning data occurring within -2 to +1 hours of the rawinsonde launch time and within 100 km of the observing location were selected. In general, the summer-averaged temperature and mixing ratio of the lower troposphere for lightning cases are about 1 K and $1{\sim}2\,g\,kg^{-1}$ higher, respectively, than for no-lightning cases. Box-whisker plots show that the ranges of the SI and EP values for lightning and no-lightning cases are well separated, but the overlap between the two is not negligible. The optimized threshold values for the detection of lightning are determined objectively from the highest Heidke skill score (HSS), the most suitable validation parameter for rare events such as lightning, by simulating over candidate SI and EP threshold values. Although the HSS is not high (0.15~0.30) and the number and values of the selected SI and EP depend on geographic location, the new threshold values can be used as a supplementary tool for the detection or forecasting of lightning over South Korea.
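
Below is a minimal sketch of the threshold optimization described above: computing the Heidke skill score from a 2x2 contingency table and scanning candidate thresholds for the value that maximizes it. The standard HSS formula is used; the candidate grid and the dummy K-index data are assumptions.

```python
# Hedged sketch: Heidke skill score (HSS) from a 2x2 contingency table, and a
# scan over candidate thresholds as in the objective threshold selection above.
import numpy as np

def hss(hits, misses, false_alarms, correct_negatives):
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n   # correct by chance
    return (a + d - expected) / (n - expected)

def best_threshold(index_values, lightning_observed, candidates):
    scores = []
    for t in candidates:
        forecast = index_values >= t                 # e.g. K-index above threshold
        a = np.sum(forecast & lightning_observed)    # hits
        b = np.sum(forecast & ~lightning_observed)   # false alarms
        c = np.sum(~forecast & lightning_observed)   # misses
        d = np.sum(~forecast & ~lightning_observed)  # correct negatives
        scores.append(hss(a, c, b, d))
    return candidates[int(np.argmax(scores))], max(scores)

rng = np.random.default_rng(1)
k_index = rng.normal(25, 8, 1000)                    # dummy K-index soundings
obs = k_index + rng.normal(0, 8, 1000) > 32          # dummy lightning occurrence
print(best_threshold(k_index, obs, np.arange(10, 40)))
```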

Design of Ultrasonic Nebulizer for Inhalation Toxicology Study of Cadmium with Application of Engineering Methodology and Performance Evaluation with Light-Scattering Photometer (공학적 기법을 응용한 카드뮴의 흡입독성 연구를 위한 초음파 네뷸라이져의 설계 그리고 광산란 광도계를 이용한 성능평가)

  • Jeung Jae Yeal;Milton Donald K.;Kim Tae Hyeung;Lee Jong Young;Chong Myoung Soo;Ko Kwang Jae;Kim Sang Duck;Kang Sung Ho;Song Young Sun;Lee Ki Nam
    • Journal of Physiology & Pathology in Korean Medicine / v.16 no.3 / pp.464-471 / 2002
  • The authors applied several engineering methodologies to a classical ultrasonic nebulizer to cope with its shortcomings. After several rounds of trial and error, we obtained meaningful results. To evaluate the modified ultrasonic nebulizer for inhalation toxicology of cadmium, a light-scattering photometer was used. This paper covers one part of the inhalation exposure system for an inhalation toxicology study of cadmium. Under the test conditions of a source temperature of 50℃ and an inlet-duct band temperature of 150℃, the aerosol generation results for sodium chloride and cadmium chloride were as follows. The coefficients of variation (CV) of sodium chloride and cadmium chloride over repeated trials were 3.38 and 4.77 for 10 g, 2.47 and 5.02 for 5 g, and 4.70 and 2.98 for 2.5 g; all CVs were within the 10% acceptance variability. Count-per-minute (CPM) changes of NaCl and CdCl₂ over 5 repeated trials were similar. CPM ratios of CdCl₂/NaCl were 1.13 for 10 g, 0.76 for 5 g, and 1.06 for 2.5 g, so the relative aerosol generation of cadmium chloride to sodium chloride was highest at 10 g. Efficiency increases of 24.50% for 5 g NaCl, 14.91% for 2.5 g NaCl, and 16.48% for 2.5 g CdCl₂ with respect to the theoretical efficiency were observed, but a 0.04% efficiency decrease was observed for 5 g CdCl₂. With source temperatures of 20, 50, and 70℃ and inlet-duct band temperatures of 20, 50, 100, 150, and 200℃, the aerosol generation results for NaCl and CdCl₂ were as follows. CPM trends for each quantity, except 10 g NaCl at an inlet-duct band temperature of 200℃, were similar, and the highest CPM was observed at a source temperature of 70℃ for each inlet-duct band temperature. The highest CPMs for 10, 5, and 2.5 g NaCl were observed at a source temperature of 70℃ and an inlet-duct band temperature of 20℃. Aerosol generation of cadmium chloride increased with higher source temperature, except at an inlet-duct band temperature of 200℃. The highest CPMs for 10, 5, and 2.5 g CdCl₂ were likewise observed at a source temperature of 70℃ and an inlet-duct band temperature of 20℃, a trend similar to NaCl aerosol generation. The observed efficiencies of 5 and 2.5 g NaCl were similar to the theoretical efficiency, but a -3.08% efficiency decrease for 5 g CdCl₂ and a 17.47% efficiency increase for 2.5 g CdCl₂ were observed. The CPM ratio of CdCl₂/NaCl for 10 g differed from those for 5 and 2.5 g, and the 2.5 g ratio was higher than the 5 g ratio. In conclusion, to obtain maximum aerosol generation for NaCl and CdCl₂, the appropriate inlet-duct band temperature should be set for each material and the source temperature increased. Sodium chloride can be used to evaluate the performance of, and predict the cadmium aerosol concentration in, the aerosol generator and inhalation exposure system.
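
As a small illustration of the repeatability criterion quoted above, the sketch below computes the coefficient of variation (CV = standard deviation / mean × 100) over repeated trials and checks it against the 10% acceptance bound. The readings are dummy values, not the paper's data.

```python
# Hedged sketch: coefficient of variation across repeated aerosol-generation
# trials, with the 10% acceptance check. Trial readings are dummy values.
import numpy as np

def cv_percent(readings):
    readings = np.asarray(readings, dtype=float)
    return readings.std(ddof=1) / readings.mean() * 100.0

trials = [412.0, 398.5, 421.3, 405.2, 417.8]   # dummy CPM readings, 5 repeated trials
cv = cv_percent(trials)
print(f"CV = {cv:.2f}% -> {'within' if cv <= 10 else 'outside'} 10% acceptance")
```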

A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.123-136 / 2014
  • Online shopping has recently developed further as the use of the Internet and a variety of smart mobile devices has become more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls, and consequently to increasingly fierce competition among online retailers; as a result, many Internet shopping malls make significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they enter a specific keyword on an Internet portal site. The price of each keyword is generally estimated from the keyword's frequency of appearance. However, it is widely accepted that keyword prices cannot be based solely on frequency, because many keywords appear frequently yet have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people use them frequently. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords, and the demand for automating this extraction is increasing with the drive to improve online sales performance. In this study, we propose a methodology that automatically extracts only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as one used directly before shopping behavior; in other words, only search keywords that lead from the search results page to shopping-related pages are extracted from the entire set of search keywords. The rankings of the extracted keywords are then compared with the rankings of the entire keyword set. Two types of data are used in the experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. The experimental dataset came from a web-site ranking service and the biggest portal site in Korea; the original sample contains 150 million transaction logs. First, portal sites are selected, and the search keywords used on those sites are extracted, which can be done by simple parsing. The extracted keywords are ranked by frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site, from which a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all search keywords with those of the shopping-related keywords. To this end, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 as the set of true shopping keywords. We measured the precision, recall, and F-score of the entire keyword set and of the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were all higher than those of the entire keyword set. This study thus proposes a scheme that obtains shopping-related keywords in a relatively simple manner.
We could extract shopping-related keywords simply by examining transactions whose next visit is a shopping mall. The resulting shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can easily be applied to the construction of other domain-specific keyword sets as well as shopping-related ones.
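
A minimal sketch of the extraction rule described above is given below: a search keyword is kept only when the next page visit in the same user's browsing history is a shopping-mall site. The column names, the site-information lookup, and the toy log are assumptions.

```python
# Hedged sketch of the extraction rule: keep a search keyword only when the
# next visit in the same user's browsing history is a shopping-mall site.
# Column names, the mall list, and the toy log are assumptions.
import pandas as pd

logs = pd.DataFrame({
    "user": [1, 1, 1, 2, 2],
    "ts": pd.to_datetime(["2012-07-01 10:00", "2012-07-01 10:01",
                          "2012-07-01 10:05", "2012-07-02 09:00",
                          "2012-07-02 09:02"]),
    "site": ["portal.example", "mall.example", "news.example",
             "portal.example", "blog.example"],
    "keyword": ["running shoes", None, None, "election results", None],
})
shopping_sites = {"mall.example"}                    # assumed site-information lookup

logs = logs.sort_values(["user", "ts"])
logs["next_site"] = logs.groupby("user")["site"].shift(-1)
mask = logs["keyword"].notna() & logs["next_site"].isin(shopping_sites)
shopping_keywords = logs.loc[mask, "keyword"].value_counts()   # rank by frequency
print(shopping_keywords)                             # -> "running shoes" qualifies
```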

Comparison of rainfall-runoff performance based on various gridded precipitation datasets in the Mekong River basin (메콩강 유역의 격자형 강수 자료에 의한 강우-유출 모의 성능 비교·분석)

  • Kim, Younghun;Le, Xuan-Hien;Jung, Sungho;Yeon, Minho;Lee, Gihae
    • Journal of Korea Water Resources Association / v.56 no.2 / pp.75-89 / 2023
  • As the Mekong River is shared by several nations, it is difficult to collect precipitation data across its basin, and the quantitative and qualitative quality of the datasets differs from country to country, which may increase the uncertainty of hydrological analyses. Recently, with the development of remote sensing technology, grid-based precipitation products (GPPs) have become easier to obtain, and various hydrological studies have been conducted for ungauged or large watersheds using GPPs. In this study, rainfall-runoff simulation in the Mekong River basin was conducted using the SWAT model, a semi-distributed hydrological model, with three satellite GPPs (TRMM, GSMaP, PERSIANN-CDR) and two gauge-based GPPs (APHRODITE, GPCC). Four water-level stations, Luang Prabang, Pakse, Stung Treng, and Kratie, which are major outlets of the Mekong mainstream, were selected; the parameters of the SWAT model were calibrated using APHRODITE as the observation for 2001-2011, and the runoff simulations were verified for 2012-2013. In addition, spatio-temporal correction of the original satellite precipitation products was performed using ConvAE, a convolutional autoencoder model, and rainfall-runoff performance was compared before and after the correction. The original satellite precipitation products and GPCC were quantitatively under- or over-estimated, or showed spatially very different patterns, compared with APHRODITE, whereas the satellite precipitation products corrected with ConvAE showed dramatically improved spatial correlation. For all outlets, the runoff simulations using the ConvAE-corrected satellite precipitation products were significantly more accurate than those using the original products. Therefore, the bias correction technique using ConvAE presented in this study can be applied to various hydrological analyses of large watersheds where the rain gauge network is not dense.
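
For orientation, below is a minimal PyTorch sketch of a convolutional autoencoder (ConvAE) trained to map satellite precipitation grids toward a gauge-based reference such as APHRODITE. The layer sizes, grid size, and training setup are assumptions; the abstract does not give the network configuration.

```python
# Hedged sketch of a convolutional autoencoder (ConvAE) for correcting gridded
# satellite precipitation toward a gauge-based reference. Layer sizes and the
# training setup are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # (H, W) -> (H/2, W/2)
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> (H/4, W/4)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.ReLU(),     # precipitation >= 0
        )

    def forward(self, x):            # x: (batch, 1, H, W) daily precipitation grids
        return self.decoder(self.encoder(x))

model = ConvAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
satellite = torch.rand(8, 1, 64, 64)      # dummy satellite grids (e.g. TRMM)
reference = torch.rand(8, 1, 64, 64)      # dummy APHRODITE grids on the same mesh
loss = nn.functional.mse_loss(model(satellite), reference)
loss.backward()
optimizer.step()
```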

A Method of Reproducing the CCT of Natural Light using the Minimum Spectral Power Distribution for each Light Source of LED Lighting (LED 조명의 광원별 최소 분광분포를 사용하여 자연광 색온도를 재현하는 방법)

  • Yang-Soo Kim;Seung-Taek Oh;Jae-Hyun Lim
    • Journal of Internet Computing and Services / v.24 no.2 / pp.19-26 / 2023
  • Humans have adapted and evolved to natural light, but as people stay indoors longer in modern times, biorhythm disturbances have emerged. To solve this problem, research is being conducted on lighting that reproduces the correlated color temperature (CCT) of natural light, which varies from sunrise to sunset. To reproduce the CCT of natural light, a luminaire is built from multiple LED light sources with different CCTs; a control index DB is then constructed by measuring and collecting the light characteristics of the input-current combinations for each light source over hundreds to thousands of steps, and the lighting is controlled by matching against this DB. The problem with this control method is that the finer the steps of the input-current combinations, the greater the time and economic costs. In this paper, an LED lighting control method is proposed that reproduces the CCT of natural light by applying interpolation and combination calculations to the minimum spectral power distribution (SPD) of each light source. First, the minimum SPD of each of the five channels was measured and collected for an LED luminaire composed of light-source channels with different CCTs, each providing 256-step input-current control. Interpolation was applied to the minimum SPD of each channel to generate 256-step SPDs per channel, and the SPDs for all control combinations of the LED lighting were generated by combining the per-channel SPDs. Illuminance and CCT were calculated from the generated SPDs, a control index DB was constructed, and the CCT of natural light was reproduced through a matching technique. In the performance evaluation, the CCT of natural light was reproduced with an average error rate of 0.18% while meeting the recommended indoor illuminance standard.
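
The pipeline above can be sketched as follows: scale each channel's minimum SPD across the 256 current steps (assuming the SPD scales linearly with the step, which the interpolation implies but the abstract does not state), sum the per-channel SPDs for a control combination, and compute the CCT from the resulting spectrum. Real CIE 1931 color-matching functions must be substituted for the placeholder arrays, and McCamy's approximation stands in for the paper's unspecified CCT computation.

```python
# Hedged sketch: per-channel linear SPD scaling, channel combination, and CCT
# via McCamy's approximation. The linearity assumption, placeholder CMFs, and
# dummy minimum SPDs are all assumptions, not the paper's data or method.
import numpy as np

wavelengths = np.arange(380, 781, 5)                 # nm, SPD sampling grid
n_ch = 5                                             # five LED channels

rng = np.random.default_rng(0)
min_spd = rng.random((n_ch, wavelengths.size))       # dummy measured minimum SPDs
cmf = rng.random((3, wavelengths.size))              # placeholder: load real CIE 1931 CMFs

def combined_spd(steps):
    """SPD for one control combination: per-channel linear scaling, then sum."""
    scale = np.asarray(steps, dtype=float)           # assume SPD ~ current step
    return (scale[:, None] * min_spd).sum(axis=0)

def cct_mccamy(spd):
    X, Y, Z = cmf @ spd                              # tristimulus via CIE CMFs
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    n = (x - 0.3320) / (0.1858 - y)                  # McCamy's approximation
    return 449 * n**3 + 3525 * n**2 + 6823.3 * n + 5520.33

spd = combined_spd([10, 42, 3, 77, 120])             # one of the 256^5 combinations
print("CCT ~ %.0f K" % cct_mccamy(spd))
```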