The Variations of Stratospheric Ozone over the Korean Peninsula 1985~2009 (한반도 상공의 오존층 변화 1985~2009)
Atmosphere, v.21 no.4, pp.349-359, 2011
The climatology of stratospheric ozone over the Korean Peninsula, presented in previous studies (e.g., Cho et al., 2003; Kim et al., 2005), is updated using daily and monthly satellite and ground-based data through December 2009. In addition, long-term satellite data [Total Ozone Mapping Spectrometer (TOMS), Ozone Monitoring Instrument (OMI), 1979~2009] have also been analyzed in order to deduce the spatial distributions and temporal variations of global total ozone. The global average of total ozone (1979~2009) is 298 DU, with a minimum of about 244 DU in equatorial latitudes, increasing poleward in both hemispheres to a maximum of about 391 DU in the Okhotsk region. The recent period, from 2006 to 2009, shows a reduction in total ozone of 6% relative to the values for the pre-1980s (1979~1982). The long-term trends were estimated using a multiple linear regression model (e.g., WMO, 1999; Cho et al., 2003) including explanatory variables for the seasonal variation, the Quasi-Biennial Oscillation (QBO), and the solar cycle over three different time intervals: the whole interval from 1979 to 2009, the former interval from 1979 to 1992, and the latter interval from 1993 to 2009, with the turnaround point, a deep minimum in 1993, related to the effect of the Mt. Pinatubo eruption. The global trend shows -0.93%
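The regression approach described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the monthly series and the seasonal, QBO, and solar proxies are synthetic stand-ins, and the coefficient values are assumptions chosen only to make the decomposition visible.

```python
import numpy as np

# Toy multiple linear regression trend model in the WMO (1999) style:
# monthly total ozone = intercept + linear trend + seasonal harmonics
# + QBO proxy + solar-cycle proxy + noise. All inputs are synthetic.
rng = np.random.default_rng(0)
n_months = 31 * 12                        # 1979-2009
t = np.arange(n_months)

# Synthetic proxy series (assumed shapes, not the paper's actual data)
seasonal = 15 * np.cos(2 * np.pi * t / 12)
qbo = 5 * np.sin(2 * np.pi * t / 28)      # ~28-month QBO proxy
solar = 4 * np.sin(2 * np.pi * t / 132)   # ~11-year solar-cycle proxy
true_trend = -0.03                        # assumed trend, DU per month
ozone = 298 + true_trend * t + seasonal + qbo + solar + rng.normal(0, 2, n_months)

# Design matrix: intercept, trend, seasonal harmonics, QBO, solar
X = np.column_stack([
    np.ones(n_months), t,
    np.cos(2 * np.pi * t / 12), np.sin(2 * np.pi * t / 12),
    np.sin(2 * np.pi * t / 28), np.sin(2 * np.pi * t / 132),
])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)

# Express the fitted trend as percent per decade relative to the mean level
trend_per_decade_pct = coef[1] * 120 / coef[0] * 100
print(round(trend_per_decade_pct, 2))     # ~ -1.2% per decade for this toy series
```

Because the explanatory variables are regressed out jointly, the fitted trend coefficient isolates the long-term change from the seasonal, QBO, and solar signals, which is the point of the multi-regressor design.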
Recently, CO2 emissions and energy consumption have become critical global issues that will decide the future of nations. In particular, the spread of IT products and the increased use of the internet and web applications have increased the IT industry's energy consumption and CO2 emissions, even though information technologies drive global economic growth. The EU, the United States, Japan, and other developed countries are applying IT-related environmental regulations such as WEEE (Waste Electrical and Electronic Equipment), RoHS (Restriction of the use of Certain Hazardous Substances), REACH (Registration, Evaluation, Authorization and Restriction of CHemicals), and EuP (Energy using Product), and have established systematic green business/IT strategies to enhance the competitiveness of their IT industries. For example, the Japanese government proposed the "Green IT Initiative" to make economic growth compatible with environmental protection. Not only energy-saving technologies but also energy-saving systems have been developed to achieve sustainable development. Korea's CO2 emissions and energy consumption have continued to grow at comparatively high rates. This is related to its industrial structure, which depends on high energy-consuming industries such as the iron and steel, automotive, shipbuilding, and semiconductor industries. In particular, the export proportion of IT manufacturing is quite high in Korea; for example, the global market share of semiconductors such as DRAM was about 80% in 2008. Accordingly, Korea needs to establish a systematic strategy to respond to global environmental regulations and to maintain competitiveness in the IT industry. However, Korea's green competitiveness ranks 11th among 15 major countries, and its R&D budget for green technology, though growing at a high rate, is not large enough to develop energy-saving technologies for the infrastructure and value chain of a low-carbon society. Moreover, there are no concrete action plans in Korea.
This research aims to deduce the priorities of Korean green business/IT strategies using the multi-attribute weighted average method. We selected a panel of 19 experts who work at green business related firms such as HP, IBM, and Fujitsu, and, based on a review of the existing literature, selected six assessment indices: the urgency of technology development, the technology gap between Korea and developed countries, the effect of import substitution, the spillover effect of the technology, market growth, and the export potential of package or stand-alone products. We submitted questionnaires to the panel at approximately weekly intervals to elicit priorities for the green business/IT strategies. The strategies are broadly classified as follows. The first strategy, which consists of green business/IT policy and standardization, process and performance management, and IT industry and legislative alignment, relates to the government's role in the green economy. The second strategy relates to IT that supports environmental sustainability, such as travel and ways-of-working management, printer output and recycling, intelligent building, printer rationalization, and collaboration and connectivity. The last strategy relates to green IT systems, services, and usage, such as data center consolidation and energy management, hardware recycling and decommissioning, server and storage virtualization, device power management, and service supplier management. All questionnaires were assessed via a five-point Likert scale ranging from "very little" to "very large." Our findings show that IT to support environmental sustainability takes priority over the other strategies. In detail, green business/IT policy and standardization is the most important element of the government's role. The strategies of intelligent building and travel and ways-of-working management take priority over the others for supporting environmental sustainability.
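The multi-attribute weighted average method named above can be sketched in a few lines. Every number here is invented for illustration: the index weights and the mean Likert scores are assumptions, not the panel's actual ratings.

```python
# Hypothetical illustration of the multi-attribute weighted average method:
# each strategy receives mean expert Likert scores (1-5) on the six
# assessment indices, and a weighted average yields the priority ranking.
indices = ["urgency", "tech_gap", "import_substitution",
           "spillover", "market_growth", "export_potential"]
weights = [0.20, 0.15, 0.15, 0.15, 0.20, 0.15]   # assumed weights, sum to 1

scores = {  # invented mean Likert scores per strategy, same index order
    "government_role":       [3.8, 3.5, 3.2, 3.6, 3.4, 3.1],
    "it_for_sustainability": [4.2, 3.9, 3.6, 4.0, 4.1, 3.7],
    "green_it_systems":      [4.0, 3.7, 3.5, 3.8, 3.9, 3.6],
}

def weighted_avg(vals, w):
    """Weighted average of one strategy's scores over all indices."""
    return sum(v * wi for v, wi in zip(vals, w))

ranking = sorted(scores, key=lambda s: weighted_avg(scores[s], weights),
                 reverse=True)
print(ranking[0])  # 'it_for_sustainability' ranks first in this toy data
```

With the toy numbers above, the IT-for-sustainability strategy dominates on every index and therefore ranks first, mirroring the qualitative finding in the abstract.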
Finally, the strategies of data center consolidation and energy management and of server and storage virtualization have the greatest influence on green IT systems, services, and usage. This research yields the following implications. The energy consumption and CO2 emissions of IT equipment, including electrical business equipment, will need to be clearly indicated in order to manage the effect of a green business/IT strategy, and it is necessary to develop tools that measure the performance of green business/IT at each step. Additionally, intelligent buildings could foster energy saving and the growth of low-carbon and related industries together. It is also necessary to expand the effect of virtualization by adjusting and controlling the relationships between the management teams.
To investigate the exchange rates of mercury (Hg) across the soil-air boundary, we measured Hg flux using the gradient technique at a major waste reclamation site, Nan-Ji-Do. Based on these measurement data, we attempted to provide insights into various aspects of Hg exchange in a strongly polluted soil environment. According to our analysis, the study site turned out to be not only a major emission source area but also a major sink area. When these data were compared on an hourly basis over a full-day scale, large emission and deposition fluxes centered on daytime periods relative to nighttime periods. However, when the frequencies with which emission and deposition occur were compared, a very contrasting pattern emerged: emission was dominant during nighttime periods, while deposition was most favored during daytime periods. When a similar comparison was made as a function of wind direction, it was noticed that there may be a major Hg source in the easterly direction that brings about significant deposition of Hg in the study area. To account for the environmental conditions controlling the vertical direction of Hg exchange, we compared environmental conditions for both the whole data group and those observed for the wind direction of strong deposition events. Results of this analysis indicated that the concentrations of pollutant species varied sensitively enough to reflect the environmental conditions for each direction of exchange. Correlation analysis indicated that wind speed and ozone concentrations best reflected changes in the magnitudes of emission/deposition fluxes. The results of factor analysis also indicated the possibility that Hg emission at the study area is a temperature-driven process, while deposition is affected by the mixed effects of various factors including temperature, ozone, and non-methane hydrocarbons.
If the computed emission rate is extrapolated to the whole study area, we estimate that the annual emission of Hg from the study area can amount to approximately 6 kg.
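The gradient technique used above can be sketched with its defining relation, flux = -K · dC/dz, where K is a turbulent transfer coefficient and dC/dz is the vertical concentration gradient between two measurement heights. The numbers and the function name below are illustrative assumptions, not the study's measured values.

```python
# Minimal sketch of the micrometeorological gradient technique for
# soil-air Hg exchange. Sign convention: positive flux = emission
# (upward), negative flux = deposition (downward).
def hg_flux(c_lower_ngm3, c_upper_ngm3, z_lower_m, z_upper_m, k_m2s):
    """Hg flux (ng m^-2 s^-1) from a two-height concentration gradient."""
    dc_dz = (c_upper_ngm3 - c_lower_ngm3) / (z_upper_m - z_lower_m)
    return -k_m2s * dc_dz

# Emission case: concentration is higher near the soil surface,
# so the gradient is negative and the computed flux is upward.
flux = hg_flux(c_lower_ngm3=12.0, c_upper_ngm3=10.0,
               z_lower_m=0.5, z_upper_m=1.5, k_m2s=0.1)
print(flux)  # 0.2 ng m^-2 s^-1, i.e., emission
```

An annual site total like the ~6 kg estimate then follows by multiplying a representative flux by the site area and the number of seconds in a year.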
The demand for large-scale horticultural complexes utilizing reclaimed lands is increasing, and one of the pending issues for the construction of large-scale facilities is to establish foundation design criteria. In this paper, we review previous studies on methods of reinforcing foundations on soft ground. The target construction methods are spiral piles, wood piles, crushed stone piles, and the PF (point foundation) method. In order to evaluate the performance of each foundation construction method, pull-out resistance, bearing capacity, and settlement were measured. At the same diameter, pull-out resistance increased with increasing penetration depth. A simple comparison is difficult due to differences in reinforcement method, diameter, and penetration depth, but bearing capacity was high in the order of crushed stone pile, PF method, and wood pile foundation. In the case of wood piles, the increase in pull-out resistance differed depending on the slenderness ratio. The wood pile, crushed stone pile, and PF construction methods, which are foundation reinforcement methods with bearing capacities of 105 kN/㎡ to 826 kN/㎡, are considered sufficient for application to greenhouse foundations. There was a limitation in identifying a consistent trend for each foundation reinforcement method through existing studies. If these data are supplemented through additional empirical tests, it is judged that a basic design guideline that can satisfy the structural and economic requirements of the greenhouse can be presented.
With the development of the Internet, consumers have the opportunity to check product information easily through E-Commerce. Product reviews used in the process of purchasing goods are based on user experience, allowing consumers to act as producers of information as well as to refer to it. This can increase the efficiency of purchasing decisions from the consumer's perspective, and from the seller's point of view it can help develop products and strengthen competitiveness. However, it takes a lot of time and effort to grasp, from the vast number of product reviews offered by E-Commerce sites, the overall assessment and the assessment dimensions that a consumer considers important for the products they want to compare. This is because product reviews are unstructured information, and it is difficult to read the sentiment and assessment dimension of a review immediately. For example, consumers who want to purchase a laptop would like to check the assessment of comparable products along each dimension, such as performance, weight, delivery, speed, and design. Therefore, in this paper, we propose a method to automatically generate multi-dimensional product assessment scores from the reviews of the products to be compared. The method presented in this study consists of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are created based on reviews of the large-category product group. By combining word embedding and association analysis, the dimension classification model compensates for the limitation of word-embedding methods in existing studies, which consider only the distance between words within sentences when finding the relevance between dimensions and words.
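The idea of mixing embedding similarity with association (co-occurrence) evidence can be sketched as below. This is not the paper's implementation: the toy vectors, counts, mixing weight `alpha`, and the `affinity` helper are all invented for illustration.

```python
import numpy as np

# Illustrative combination of word embeddings with association analysis:
# a word's affinity to an assessment dimension mixes embedding cosine
# similarity with a co-occurrence-based association score, so that
# corpus-level frequency evidence supplements in-sentence distance.
emb = {  # toy 3-d "embeddings" (assumed values)
    "battery":     np.array([0.9, 0.1, 0.0]),
    "performance": np.array([0.8, 0.2, 0.1]),
    "design":      np.array([0.1, 0.9, 0.2]),
}
cooccur = {("battery", "performance"): 40, ("battery", "design"): 5}  # counts

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def affinity(word, dim, alpha=0.5):
    """Mix embedding similarity with normalized co-occurrence frequency."""
    assoc = cooccur.get((word, dim), 0) / max(sum(cooccur.values()), 1)
    return alpha * cosine(emb[word], emb[dim]) + (1 - alpha) * assoc

best = max(["performance", "design"], key=lambda d: affinity("battery", d))
print(best)  # 'performance'
```

Here "battery" is assigned to the performance dimension because both the embedding similarity and the co-occurrence counts point the same way; the association term is what would rescue a word whose embedding neighborhood alone is ambiguous.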
The sentiment analysis model is a CNN trained on phrase-level data tagged as positive or negative for accurate polarity detection. In the individual product scoring phase, the pre-built models are applied to phrase-level reviews. Multi-dimensional assessment scores are then obtained by grouping the phrases judged to describe a specific dimension and aggregating their polarities by assessment dimension according to the proportion of such reviews. In the experiment of this paper, approximately 260,000 reviews of the large-category product group were collected to build the dimension classification model and the sentiment analysis model. In addition, reviews of laptops from companies S and L sold on E-Commerce sites were collected and used as experimental data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word embedding method with an association analysis indicating the frequency between words and dimensions. As a result of combining word embedding and association analysis, the accuracy of the model increased by 13.7%. The sentiment analysis model analyzed assessments more closely when trained on phrase units rather than on sentences; its accuracy was confirmed to be 29.4% higher than that of the sentence-based model. Through this study, both sellers and consumers can expect efficient decision making in purchasing and product development, given that they can make multi-dimensional comparisons of products. In addition, text reviews, which are unstructured data, were transformed into objective values such as frequencies and morphemes and analyzed using word embedding and association analysis together, improving the objectivity of the multi-dimensional analysis.
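The scoring phase described above reduces to a simple aggregation once each phrase carries a predicted dimension and polarity. In this sketch the classifier outputs are replaced by hand-written (dimension, polarity) pairs, so the numbers are illustrative only; the per-dimension score is the share of positive phrases in that dimension.

```python
from collections import defaultdict

# Invented phrase-level predictions for one product: each entry is
# (assessment dimension, polarity) as the pre-built models would emit.
phrases = [
    ("performance", "pos"), ("performance", "pos"), ("performance", "neg"),
    ("weight", "pos"), ("weight", "neg"),
    ("delivery", "pos"), ("design", "pos"), ("design", "pos"),
]

# Count polarities per dimension, then score each dimension as the
# proportion of positive phrases among all phrases in that dimension.
counts = defaultdict(lambda: {"pos": 0, "neg": 0})
for dim, pol in phrases:
    counts[dim][pol] += 1

scores = {dim: c["pos"] / (c["pos"] + c["neg"]) for dim, c in counts.items()}
print(scores["performance"])  # 2 positive / 3 total = 0.666...
```

Running the same aggregation over two products' reviews yields directly comparable per-dimension scores, which is the multi-dimensional comparison the method targets.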
This will be an attractive analysis model in that it not only enables more effective service deployment in the evolving and fiercely competitive E-Commerce market, but also satisfies both customers and sellers.
From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the fatal respiratory disease caused by SARS-CoV-2) were published. The rapid increase in the number of papers related to COVID-19 is putting time and technical constraints on healthcare professionals and policy makers trying to quickly find important research. Therefore, in this study, we propose a method of extracting useful information from the text data of an extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords to be searched were extracted from papers related to COVID-19, and detailed topics were identified. The data come from the CORD-19 data set on Kaggle, a free academic resource prepared by major research groups and the White House to respond to the COVID-19 pandemic and updated weekly. The research method is divided into two main parts. First, 41,062 articles were collected through data filtering and pre-processing of the abstracts of 47,110 academic papers including full text. For this purpose, the number of publications related to COVID-19 by year was analyzed through exploratory data analysis using a Python program, and the top 10 journals with active research were identified. The LDA and Word2vec algorithms were used to derive research topics related to COVID-19, and after analyzing related words, similarity was measured. Second, papers containing 'vaccine' and 'treatment' were extracted from among the topics derived from all papers: a total of 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each collected paper, detailed topics were analyzed using the LDA and Word2vec algorithms, and a clustering method with PCA dimension reduction was applied to visualize groups of papers with similar themes using the t-SNE algorithm. A noteworthy point from the results of this study is that topics that were not derived from the topics derived for all papers being researched in relation to COVID-19 (
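The keyword-subset step in the pipeline above can be sketched with stand-in data. The abstracts below are invented examples, and the simple term-frequency summary is only a rough stand-in for the LDA/Word2vec topic modelling the study actually runs on CORD-19.

```python
import re
from collections import Counter

# Invented mini-corpus standing in for CORD-19 abstracts.
abstracts = [
    "mRNA vaccine efficacy against SARS-CoV-2 variants",
    "antiviral treatment outcomes in hospitalized patients",
    "epidemiological modelling of viral transmission dynamics",
    "vaccine hesitancy and treatment access in rural areas",
]
STOPWORDS = {"and", "in", "of", "the", "against"}

def subset(papers, keyword):
    """Keep only the papers mentioning the keyword (case-insensitive)."""
    return [p for p in papers if keyword in p.lower()]

def top_terms(papers, n=3):
    """Most frequent non-stopword terms, a crude topic signal per subset."""
    words = Counter(w for p in papers
                    for w in re.findall(r"[a-z]+", p.lower())
                    if w not in STOPWORDS)
    return [w for w, _ in words.most_common(n)]

vaccine_papers = subset(abstracts, "vaccine")
treatment_papers = subset(abstracts, "treatment")
print(len(vaccine_papers), len(treatment_papers))  # 2 2
```

In the study itself, each subset is then topic-modelled separately, which is why subset-specific topics can emerge that the all-papers model does not surface.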