• Title/Summary/Keyword: Time Series Analysis


A Study on Intelligent Value Chain Network System based on Firms' Information (기업정보 기반 지능형 밸류체인 네트워크 시스템에 관한 연구)

  • Sung, Tae-Eung;Kim, Kang-Hoe;Moon, Young-Su;Lee, Ho-Shin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.67-88
    • /
    • 2018
  • Until recently, in recognition of the significance of the sustainable growth and competitiveness of small and medium-sized enterprises (SMEs), governmental support has mainly been provided for tangible resources such as R&D, manpower, and funds. However, concerns have also been raised about inefficiencies in these support systems, such as underestimated or redundant support, because conflicting policies exist regarding the appropriateness, effectiveness, and efficiency of business support. From the perspective of either the government or a company, we believe that, given the limited resources of SMEs, technology development and capacity enhancement through collaboration with external sources is the basis for creating competitive advantage, and we emphasize value creation activities toward that end. This is why value chain network analysis is necessary: to analyze inter-company deal relationships across a series of value chains and to visualize the results by establishing knowledge ecosystems at the corporate level. There exist the Technology Opportunity Discovery (TOD) system, which provides information on companies' relevant products or patented technology status through retrieval by patent, product, or company name, and CRETOP and KISLINE, which both allow users to view company (financial) information and credit information; however, no online system provides a list of similar (competitive) companies based on value chain network analysis, or information on potential clients or demanders with whom business deals might be made in the future. Therefore, we focus on the "Value Chain Network System (VCNS)", a support partner for corporate business strategy planning developed and managed by KISTI, and investigate the types of embedded network-based analysis modules, the databases (D/Bs) that support them, and how to utilize the system efficiently.
Further, we explore the network visualization function of the intelligent value chain analysis system, which provides core information for understanding industrial structure and supporting a company's new product development. For a company to gain competitive superiority over others, it is necessary to identify which competitors hold patents or currently produce related products, and searching for similar companies or competitors by industry type is the key to securing competitiveness in the commercialization of the target company. In addition, transaction information, which reflects business activity between companies, plays an important role in identifying potential customers when both parties enter similar fields. Identifying competitors at the enterprise or industry level using a network map based on such inter-company sales information can be implemented as a core module of value chain analysis. The Value Chain Network System (VCNS) combines the concepts of value chain and industrial structure analysis with the corporate information collected to date, so that it can grasp not only the market competition situation of individual companies but also the value chain relationships of a specific industry. In particular, it is useful as a corporate-level information analysis tool for tasks such as identifying industry structure, tracking competitor trends, analyzing competitors, locating suppliers (sellers) and demanders (buyers), examining industry trends by item, finding promising items, finding new entrants, finding core companies and items along the value chain, and recognizing patents and the companies holding them.
In addition, given the objectivity and reliability of analysis results based on transaction deal information and financial data, the value chain network system is expected to be utilized for various purposes such as information support for business evaluation, R&D decision support, and mid- or short-term demand forecasting, in particular by more than 15,000 member companies in Korea and by employees in R&D service sectors, government-funded research institutes, and public organizations. To strengthen the business competitiveness of companies, technology, patent, and market information has so far been provided mainly by government agencies and private research-and-development service companies, framed as patent analysis (mainly ratings and quantitative analysis) or market analysis (market prediction and demand forecasting based on market reports). However, this has not resolved the lack of information that firms in Korea often face at the commercialization stage; information about competitors and potential business candidates is especially difficult to obtain. In this study, the real-time value chain analysis and visualization service module, based on the proposed network map and the data at hand, provides expected market share, estimated sales volume, and contact information (implying potential suppliers of raw materials/parts and potential demanders of complete products/modules). In future research, we intend to investigate in depth the indices of competitive factors through the participation of research subjects, to develop new competitiveness indices for competitors or substitute items, and to apply data mining techniques and algorithms to improve the performance of VCNS.
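
The network-map module described above, suppliers and demanders linked by inter-company deals, can be sketched with a minimal adjacency structure. The company names and transaction amounts below are hypothetical, not data from VCNS:

```python
from collections import defaultdict

# Hypothetical inter-company transactions: (seller, buyer, amount in KRW millions).
# Real VCNS data would come from its transaction-deal D/Bs.
transactions = [
    ("SteelCo", "AutoParts", 120),
    ("AutoParts", "CarMaker", 340),
    ("SteelCo", "ShipYard", 80),
    ("Electronics", "CarMaker", 210),
]

suppliers = defaultdict(list)   # buyer  -> [(seller, amount), ...]
demanders = defaultdict(list)   # seller -> [(buyer, amount), ...]
for seller, buyer, amount in transactions:
    suppliers[buyer].append((seller, amount))
    demanders[seller].append((buyer, amount))

def potential_suppliers(company):
    """Upstream firms on the value chain map: who sells to `company`."""
    return sorted(s for s, _ in suppliers[company])

def competitors(company):
    """Firms that share at least one buyer with `company` on the map."""
    my_buyers = {b for b, _ in demanders[company]}
    return sorted(
        seller for seller, deals in demanders.items()
        if seller != company and my_buyers & {b for b, _ in deals}
    )

print(potential_suppliers("CarMaker"))   # upstream suppliers of CarMaker
print(competitors("AutoParts"))          # rivals selling into the same market
```

Sharing a buyer is only one possible definition of "similar (competitive) company"; VCNS can also use patents and product items for the same purpose.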

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.77-97
    • /
    • 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from existing production environments. This environment has a two-sided character: it produces data while using it, and the data so produced in turn creates further value. Because of this massive scale, future information systems need to process more data, in terms of quantity, than existing systems; in terms of quality, they must not merely hold large amounts of data but process it accurately. In a small-scale information system, a person can accurately understand the system and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly difficult. In other words, more accurate processing of large amounts of data has become a basic condition for future information systems. This problem of efficient information-system performance can be addressed by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As existing systems come to contain ever larger amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system forms a large semantic data network through connection with other systems, has a wide range of usable databases, and has the advantage of searching more precisely and quickly through the relationships between predefined concepts.
In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, as the complicated logistics information system with its large amount of data had become difficult to use. It takes pre-specified information from the existing logistics system and displays it as web pages; however, little can be checked beyond the few items specified in advance, extending it with additional functions is time-consuming, and it is organized by category without a search function. It therefore has the disadvantage that it can be used easily only by those who, as with the existing system, already know it well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. In constructing it, useful functions such as performance-based logistics (PBL) contract management and a component dictionary were additionally identified and included in the ontology. To confirm that the constructed ontology can support decision making, meaningful analysis functions were implemented, such as calculating aircraft utilization rates and querying performance-based logistics contracts.
In particular, in contrast to past ontology studies that built static ontology databases, this study constructs time-series data whose values change over time, such as the daily state of each aircraft, as an ontology, and confirms through the constructed ontology that utilization rates can be calculated not only in the standard computable form but also according to various other criteria. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indices used in such contracts are easy to calculate through reasoning and built-in functions. We also propose a new performance index that complements the limitations of the currently applied indicators, and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, the failure rate and reliability of each component can be calculated, including from MTBF data of the selected items based on actual part consumption records, and from these the mission reliability and the system reliability are computed. To confirm the usability of the constructed ontology-based logistics situation management system, we evaluated it with the Technology Acceptance Model (TAM), a representative model for measuring technology acceptance, and found the proposed system to be more useful and convenient than the existing system.
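
The reliability quantities mentioned above (failure rate, component and system reliability from MTBF) follow standard formulas under an exponential-life assumption; the MTBF figures below are invented for illustration, not Air Force data:

```python
import math

def failure_rate(mtbf_hours):
    """Constant failure rate: lambda = 1 / MTBF (exponential life model)."""
    return 1.0 / mtbf_hours

def component_reliability(mtbf_hours, mission_hours):
    """R(t) = exp(-lambda * t): probability the component survives the mission."""
    return math.exp(-failure_rate(mtbf_hours) * mission_hours)

def system_reliability(mtbf_list, mission_hours):
    """Series system: the mission succeeds only if every component survives."""
    r = 1.0
    for mtbf in mtbf_list:
        r *= component_reliability(mtbf, mission_hours)
    return r

# Invented MTBF values (hours) for three components of one aircraft
mtbfs = [500.0, 1200.0, 800.0]
print(round(system_reliability(mtbfs, mission_hours=10.0), 4))
```

An ontology-based system can run exactly this kind of calculation through reasoning once MTBF and part-consumption facts are stored as ontology instances.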

Digital painting: Image transformation, simulation, heterologie and transfiguration (현대회화에서의 형태와 물질 -Digital Transfiguration에 관한 연구-)

  • Jeong, Suk-Yeong
    • Journal of Science of Art and Design
    • /
    • v.10
    • /
    • pp.161-181
    • /
    • 2006
  • The key words that appear in my theoretical study and work are image transformation in digital painting, simulation, heterologie, and transfiguration. First, consider the 'digital era' or 'new media era'. Within the rapid social and cultural change called the digital era, the image world, including painting, is undergoing dramatic change. Together with the development of scientific technology, a great number of things once deemed impossible are becoming real in the image world, and these changes greatly influence our lives. The word that compresses and displays this change of the image world is 'digital'. Digit, from the Latin word for finger, indicates a discretely changing signal, most narrowly the sequence of '0' and '1' signals in a computer. Its opposite is 'analogue'. As 'analogue' means 'infer' or 'similarity', it indicates a signal or form that changes continuously over time, in contrast to digital. In place of the analogue, the digital has emerged as a dominant principle across the whole of our current culture. Throughout culture, art, and general science, digital appears bearing modernity and importance. The prefix 'digital' (as in digital media, digital culture, digital design, digital philosophy) is treated as a synonym for the modern and the new. The advent of the digital has brought innovative change to the image world, created new aesthetic experiences we could not have before, and forecasts the formation of advanced art and the expansion of the creative domain. Various intellectual activities using computers are developing worldwide, building their own infrastructure.
In painting, the computer immediately realizes the painter's ideas, takes part in simulation work, contributes contingency (abrupt reversal, extraction, twisting, shaking, blurring, overlapping, and so on) and timing that stimulate the painter's creativity, and provides a digital formative language that enables new visual experiences for the audience. Amid the changes of the digital era, the images in my work appear as drawing-like 'transfiguration'. The word 'transfiguration' indicates not a completed and fixed substance but an endlessly moving and floating shape. This concept thus opposes substantialist thinking, and various concepts can replace it in similar cases: change, deterioration, mutation, deformity of appearance, and morphing, the term frequently used in computing. These concepts are not clearly demarcated but are variably and complexly related. Transfiguration fundamentally means the denial of, or deviation from, 'objectivity' and '(continual) stagnation'. This phenomenon has appeared across the schools of art ever since realism was rejected in the 19th century. In expressionism, futurism, cubism, and other movements at the beginning of the 20th century it is called 'deformation', in which the former referent is mostly preserved within the process of structural deviation and which retains a realistic limit that must be preserved. By contrast, the dramatic transfiguration shown in the modern era through surrealism differs in that it tends toward deterioration and deviation rather than preservation of the indicated object. From this point, the transfiguration produced by computer morphing deteriorates and hides reality and, furthermore, replaces 'reality'. Transfiguration thereby closely approaches the fake, the 'imaginary' simulation world of Baudrillard.
According to Baudrillard, the image hides and deteriorates reality and, furthermore, presents the non-existent as 'imaginary' under the name of transfiguration. A certain reality, that is, an image absent from reality, is created and overflows until it finally replaces reality: this is what Baudrillard calls simulation. In turn, Georges Bataille discusses the image produced by digital technology in terms of heterologie. The heterologie image is a visual signal established through media. The media image has the continuous characteristics of production, extinction, and transformation, and the clear boundaries between images become meaningless. The meanings of composition, excess, violation, and so on in the digital image are explained through heterology, or heterologie, proposed as an important concept by Georges Bataille, the heretic philosopher. As the forms and images of mutation take shape in accordance with mechanical production, heterologie is introduced in this theory as a very low materialism (bas materialisme). Heterologie as low materialism has been developing, with the changes of the late 20th century, into a different concept and analysis beyond the meanings of high and low. All images, including my own, de-standardize and transform the code. However, the reproduction and de-standardization of this code is not simple. The problems of transformation raised by the transfiguration in my digital drawing and painting, by simulation, and by heterologie are continuing problems. Moreover, subjects such as human existence, distance from real life, and political and social problems are being extended into actual research and various expressive works. In particular, an individual image world is established by digital painting transfiguration techniques, and its changes and review begin to acquire durability.
The consciousness of the observers who view the image is itself changing the subject. Together with this theoretical research, I seek to establish a first step toward approaching the diverse image changes of digital-era painting through transfiguration techniques, using our real and historical images.


Analysis of the Actual Conditions of the Subcontracting System in the Korean Automotive Industry (자동차산업(自動車産業)의 하도급제(下都給制) 실태분석(實態分析))

  • Kim, Joo-hoon;Cho, Kwan-haeng
    • KDI Journal of Economic Policy
    • /
    • v.13 no.2
    • /
    • pp.69-96
    • /
    • 1991
  • The economic circumstances of enterprises began to change after the series of democratization measures in 1987, and with them the competitive advantages of enterprises changed as well. Until that time, Korean enterprises had held a competitive advantage based on low labor wages. The abrupt and steady upsurge in wages, however, weakened that advantage, while the upward revaluation of the won, caused by the balance-of-payments surplus, reinforced the rising prices of export products. An urgent problem for the Korean economy is therefore to find a 'new' competitive advantage. For the time being, preserving competitiveness based on cost advantage must inevitably remain the basic strategy of industrial policy. But whereas cost advantage in the past rested on low wage levels, it must now be founded on improvements in production technology, which increase labor productivity and decrease the unit cost of products. Other measures to improve competitiveness can also be considered, such as increasing production automation, self-development of new products, and spreading and strengthening the subcontracting system among enterprises. In this paper, we tried to understand how the subcontracting system operates as a form of inter-company division of labor and in which direction it is proceeding in response to the recent changes in economic circumstances. More concretely, we tried to gauge how large the gap in bargaining power between mother companies and subcontracting companies is, how effectively subcontractors' technical capabilities contribute to mother companies, how stably the partnership between them is established in the face of weakening competitiveness, and what measures are being prepared to restore it.
In conclusion, the results of the questionnaire survey on the subcontracting system are positive, from which we can infer an optimistic view of restoring the Korean economy's competitiveness.


Relationship Between Standardized Precipitation Index and Groundwater Levels: A Proposal for Establishment of Drought Index Wells (표준강수지수와 지하수위의 상관성 평가 및 가뭄관측정 설치 방안 고찰)

  • Kim Gyoo-Bum;Yun Han-Heum;Kim Dae-Ho
    • Journal of Soil and Groundwater Environment
    • /
    • v.11 no.3
    • /
    • pp.31-42
    • /
    • 2006
  • Drought indices such as the PDSI (Palmer Drought Severity Index), SWSI (Surface Water Supply Index), and SPI (Standardized Precipitation Index) have been developed to assess and forecast the intensity of drought. To examine the applicability of groundwater level data to drought assessment, a correlation analysis between SPI and groundwater levels was conducted for each time series during the 2001 drought season. A comparison of SPI with groundwater levels in the shallow wells of three national groundwater monitoring stations, Chungju Gageum, Yangpyung Gaegun, and Yeongju Munjeong, shows that the two factors are highly correlated. For SPI with a duration of 1 month, the cross-correlation coefficients between the two factors are 0.843 at Chungju Gageum, 0.825 at Yangpyung Gaegun, and 0.737 at Yeongju Munjeong. The time lag between the peaks of the two factors is nearly zero for the 1-month SPI, meaning that groundwater level fluctuation closely follows the SPI. Moreover, for SPI with a duration of 3 months, the groundwater level proves to be a leading indicator that can predict SPI values 1 week ahead. Some of the national groundwater monitoring stations can be designated as DIWs (Drought Index Wells) based on detailed surveys of site characteristics, and new DIWs need to be drilled to assess and forecast drought in this country.
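
The lagged cross-correlation analysis described above can be sketched in plain Python. The two series below are toy values constructed so that groundwater leads by one step; they are not the study's station data:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def cross_correlation(spi, gw, max_lag=3):
    """r(lag) between groundwater at time t and SPI at time t + lag;
    a peak at lag > 0 means groundwater leads SPI by `lag` steps."""
    return {lag: pearson(gw[:len(gw) - lag], spi[lag:])
            for lag in range(max_lag + 1)}

# Toy data: groundwater level is the same signal one step ahead of SPI
series = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
gw = series[1:]     # groundwater level (leads)
spi = series[:-1]   # SPI (lags one step behind)

ccf = cross_correlation(spi, gw, max_lag=2)
best_lag = max(ccf, key=ccf.get)
print(best_lag, round(ccf[best_lag], 3))   # strongest correlation at lag 1
```

A peak at zero lag corresponds to the study's 1-month SPI result, while a peak at a positive lag corresponds to groundwater acting as a leading indicator, as found for the 3-month SPI.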

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. Failures in IT facilities in particular occur irregularly because of interdependence, and their causes are difficult to identify. Previous studies predicting failure in data centers treated each server as a single, independent state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are already being developed. By contrast, the causes of failures occurring inside servers are difficult to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur in isolation: a failure in one server may trigger failures in others or be triggered by them. In other words, while existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study assumes that failures propagate between servers.
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment and another failure occurs on other equipment within 5 minutes, the two are defined as occurring simultaneously. After constructing sequences of the devices that failed at the same time, five devices that frequently fail simultaneously within the configured sequences were selected, and the cases in which the selected devices failed together were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, reflecting the fact that the levels of multiple failures differ across servers. This algorithm increases prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated both as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved prediction accuracy in the complex-server case by optimizing the threshold for each server.
In the first experiment, which assumed a single server and multiple servers in turn, the single-server setting predicted that three of the five servers had no failure even though failures actually occurred, whereas the multiple-server setting correctly predicted failures on all five servers. This result supports the hypothesis that servers affect one another, and confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
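
Two of the modelling ideas above, attention weights that grow with a server's impact and per-server alarm thresholds, can be illustrated with a toy sketch. The scores and thresholds are invented; the paper's actual model is an LSTM-based Hierarchical Attention Network, not this simplification:

```python
import math

def softmax(scores):
    """Normalize raw scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented per-server failure scores, standing in for a lower-level encoder's output
server_scores = {"srv1": 0.9, "srv2": 0.2, "srv3": 0.7}

# Attention: servers with higher impact on the complex failure get larger weight
weights = softmax(list(server_scores.values()))
fused = sum(w * s for w, s in zip(weights, server_scores.values()))

# Per-server thresholds (the second experiment): tuned individually, not a global 0.5
thresholds = {"srv1": 0.6, "srv2": 0.4, "srv3": 0.55}
alarms = {name: score >= thresholds[name] for name, score in server_scores.items()}
print(round(fused, 3), alarms)
```

The fused score stands in for the document-level representation a real HAN would feed to its classifier; the per-server comparison shows why tuning each threshold separately can raise accuracy.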

A Comparative Study on Failure Prediction Models for Small and Medium Manufacturing Companies (중소제조기업의 부실예측모형 비교연구)

  • Hwangbo, Yun;Moon, Jong Geon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.11 no.3
    • /
    • pp.1-15
    • /
    • 2016
  • This study analyzed the prediction capabilities of a multivariate discriminant model, a logistic regression model, and an artificial neural network model based on the financial information of small and medium-sized companies listed on KOSDAQ. The sample comprised 166 firms in total: 83 companies delisted between 2009 and 2012 and 83 normal companies. The models were trained on 100 companies, 50 delisted and 50 normal, selected at random from the 166; the remaining 66 companies were used to verify the models' accuracy. Each model was built by carrying out t-tests on 79 financial ratios over the preceding 5 years and identifying 9 significant variables. The t-tests showed that profitability variables are the major predictors of financial risk at an early stage, while stability and cash flow variables become additionally significant at a later stage of insolvency. When the models' prediction capabilities were compared, the logistic regression model was the most accurate on the training data, while the artificial neural network model was the most accurate on the test data. This study differs from previous research as follows. First, it considers the time-series aspect, in light of the fact that failure proceeds gradually. Second, whereas previous studies constructed multivariate discriminant models while ignoring normality, this study reviewed the normality of the independent variables and compared the models on that basis. A policy implication of this study is that the reliability of disclosure documents matters, because the symptoms of a firm's failure appear in its financial statements; institutional arrangements to restrain moral laxity by accounting firms and their employees should therefore be strengthened.
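
The variable-screening step described above, a two-sample t-test on each financial ratio between delisted and normal firms, can be sketched as follows. The ROA figures are invented, not the study's KOSDAQ sample:

```python
import math

def t_statistic(a, b):
    """Welch two-sample t-statistic for one financial ratio,
    comparing delisted firms (a) against normal firms (b)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Toy ratios: ROA (%) one year before delisting vs. matched normal firms
delisted = [-8.1, -3.5, -6.0, -9.2, -2.8]
normal = [4.2, 6.1, 3.8, 5.5, 4.9]

t = t_statistic(delisted, normal)
print(round(t, 2))   # a large |t| flags the ratio as a candidate predictor
```

In the study, ratios passing this screen across 79 candidates became the 9 inputs shared by the discriminant, logistic regression, and neural network models.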


Long-Term Variations of Water Quality in Jinhae Bay (진해만의 장기 수질변동 특성)

  • Kwon, Jung-No;Lee, Jangho;Kim, Youngsug;Lim, Jae-Hyun;Choi, Tae-Jun;Ye, Mi-Ju;Jun, Ji-Won;Kim, Seulmin
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.17 no.4
    • /
    • pp.324-332
    • /
    • 2014
  • To reveal the long-term variations of water quality in Jinhae Bay, water quality was monitored at 9 survey stations in the bay during 2000~2012. The surface- and bottom-water concentrations of chemical oxygen demand (COD), dissolved inorganic nitrogen (DIN), dissolved inorganic phosphorus (DIP), and chlorophyll-a (Chl.-a) were higher at the Masan Bay stations than at the stations of the other bays. In particular, station 1, located in the inner area of Masan Bay, had the highest COD, DIN, and Chl.-a concentrations because terrestrial pollutant sources lie near the station and seawater circulates poorly in the inner bay. In the factor analysis, station 1 also had the highest loadings on the factors that increase organic matter and nutrients in the surface and bottom waters of Masan Bay, whereas the stations of the other bays (st. 5, st. 6, st. 7, st. 8, and st. 9) had lower loadings. In the time series analysis, the bottom-water COD concentrations at the 8 stations other than station 1 decreased distinctly, while the surface-water COD concentrations showed no distinct decreasing trend at any station. The nutrient concentrations (DIN and DIP) of both surface and bottom waters showed marked decreasing trends at all stations. These distinct decreases in bottom-water COD and in surface- and bottom-water nutrients in Jinhae Bay are likely associated with water-quality improvement actions such as the Total Pollution Load Management System (TPLMS).
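
Monotone trends of the kind reported above are often tested with the nonparametric Mann-Kendall test, which needs no distributional assumption. A minimal sketch, using an invented COD series rather than the Jinhae Bay data:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: S > 0 upward, S < 0 downward trend.
    Returns (S, z); the variance formula assumes no tied values."""
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - (1 if s > 0 else -1 if s < 0 else 0)) / math.sqrt(var)
    return s, z

# Invented annual bottom-water COD (mg/L), 2000-2012, with a decreasing tendency
cod = [3.9, 3.8, 3.7, 3.5, 3.6, 3.4, 3.2, 3.3, 3.1, 2.9, 3.0, 2.8, 2.7]

s, z = mann_kendall(cod)
print(s, round(z, 2))   # |z| > 1.96 indicates a significant trend at the 5% level
```

A strongly negative z on a station's series would correspond to the "distinct decrease" the study reports for bottom-water COD and for nutrients.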

Global Temperature Trends of Lower Stratosphere Derived from the Microwave Satellite Observations and GCM Reanalyses (마이크로파 위성관측과 모델 재분석에서 조사된 전지구에 대한 하부 성층권 온도의 추세)

  • Yoo, Jung-Moon;Yoon, Sun-Kyung;Kim, Kyu-Myong
    • Journal of the Korean earth science society
    • /
    • v.22 no.5
    • /
    • pp.388-404
    • /
    • 2001
  • To examine the relative accuracy of satellite observations and model reanalyses regarding lower stratospheric temperature trends, two satellite-observed Microwave Sounding Unit (MSU) channel 4 (Ch 4) brightness temperature datasets and two GCM reanalyses (ECMWF and GEOS) for 1981~1993 were intercompared using regression analysis of the time series. The satellite data for the period 1980~1999 are MSU4, at nadir, derived in this study, and SC4, at multiple scan angles, from Spencer and Christy (1993). The global MSU4 temperature over this period shows a cooling trend of -0.35 K/decade, with the cooling over the global ocean 1.2 times that over land. Over the common period (1981~1993), global lower stratospheric temperatures show cooling in MSU4 (-0.14 K/decade), SC4 (-0.42 K/decade), and GEOS (-0.15 K/decade), all with strong annual cycles, whereas ECMWF shows slight warming and a weak annual cycle. The 95% confidence intervals of the lower stratospheric temperature trends are wider than those of the midtropospheric (channel 2) trends, indicating less confidence in Ch 4. The difference in trend between these two atmospheric layers is largest over Northern Hemisphere land. MSU4 correlates weakly with ECMWF over the globe and strongly with GEOS near the Korean Peninsula. Low correlations (r < 0.6) between MSU4 and SC4 (or ECMWF) occur over the 30°N latitude belt, where the subtropical jet stream passes; temporal correlations among the datasets over the globe are generally high (r > 0.6). All four lower stratospheric temperature datasets show cooling trends near the Korean Peninsula, of which the SC4 value (-0.82 K/decade) is the largest.
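
The trend estimation above, an OLS regression slope over a monthly series expressed in K/decade, can be sketched on synthetic data. The prescribed -0.35 K/decade cooling and the annual cycle below are invented inputs, not the MSU record:

```python
import math

def trend_per_decade(monthly_series):
    """OLS slope of a monthly time series, converted from K/month to K/decade."""
    n = len(monthly_series)
    t = range(n)                                  # months since start
    mt = (n - 1) / 2.0                            # mean of 0..n-1
    my = sum(monthly_series) / n
    num = sum((x - mt) * (y - my) for x, y in zip(t, monthly_series))
    den = sum((x - mt) ** 2 for x in t)
    return (num / den) * 120.0                    # 120 months per decade

# Synthetic anomalies: -0.35 K/decade cooling plus a 0.5 K annual cycle
series = [-0.35 / 120.0 * m + 0.5 * math.sin(2 * math.pi * m / 12.0)
          for m in range(240)]
print(round(trend_per_decade(series), 2))   # approximately recovers -0.35
```

On a series with a strong annual cycle, as the abstract notes for MSU4, SC4, and GEOS, the raw slope recovers the prescribed trend only approximately; deseasonalizing first tightens the estimate.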


Comparison of Multi-Satellite Sea Surface Temperatures and In-situ Temperatures from Ieodo Ocean Research Station (이어도 해양과학기지 관측 수온과 위성 해수면온도 합성장 자료와의 비교)

  • Woo, Hye-Jin;Park, Kyung-Ae;Choi, Do-Young;Byun, Do-Seung;Jeong, Kwang-Yeong;Lee, Eun-Il
    • Journal of the Korean earth science society
    • /
    • v.40 no.6
    • /
    • pp.613-623
    • /
    • 2019
  • Over the past decades, daily sea surface temperature (SST) composite data have been produced from periodic, extensive satellite SST observations and used for a variety of purposes, including climate change monitoring and oceanic and atmospheric forecasting. In this study, we evaluated the accuracy and analyzed the error characteristics of SST composite data in the seas around the Korean Peninsula for optimal regional use. Four multi-satellite SST composites, OSTIA (Operational Sea Surface Temperature and Sea Ice Analysis), OISST (Optimum Interpolation Sea Surface Temperature), CMC (Canadian Meteorological Centre) SST, and MURSST (Multi-scale Ultra-high Resolution Sea Surface Temperature), collected from January 2016 to December 2016, were evaluated against in-situ temperature data measured at the Ieodo Ocean Research Station (IORS). Against the IORS measurements, the composites showed biases from a minimum of 0.12℃ (OISST) to a maximum of 0.55℃ (MURSST) and root mean square errors (RMSE) from a minimum of 0.77℃ (CMC SST) to a maximum of 0.96℃ (MURSST). Inter-comparison between the composite fields exhibited biases of -0.38 to 0.38℃ and RMSEs of 0.55 to 0.82℃, with OSTIA and CMC SST showing the smallest errors and OISST and MURSST the largest. Comparing time series extracted at the grid point closest to the IORS showed an apparent seasonal variation in both the in-situ temperature and all the SST composites; in spring, however, the composites tended to be overestimated relative to the in-situ temperature observed at the IORS.
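
The bias and RMSE statistics used throughout the evaluation above are straightforward to compute; a minimal sketch with invented daily match-up values, not the actual IORS records:

```python
import math

def bias_and_rmse(satellite, in_situ):
    """Mean bias (satellite minus in-situ) and RMSE of a daily SST composite
    against station temperatures at matched times."""
    diffs = [s - o for s, o in zip(satellite, in_situ)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, rmse

# Toy daily match-ups (deg C): composite SST vs. station temperature
sat = [15.2, 15.8, 16.4, 17.1, 18.0]
obs = [14.9, 15.5, 16.5, 16.6, 17.4]

bias, rmse = bias_and_rmse(sat, obs)
print(round(bias, 2), round(rmse, 2))
```

A positive bias like this one corresponds to the springtime overestimation the study found in the composites relative to the IORS temperatures.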