• Title/Summary/Keyword: Process-Parameters


Optimum Population in Korea : An Economic Perspective (한국의 적정인구: 경제학적 관점)

  • Koo, Sung-Yeal
    • Korea journal of population studies
    • /
    • v.28 no.2
    • /
    • pp.1-32
    • /
    • 2005
  • The optimum population of a society or country can be defined as 'the population growth path that maximizes the welfare level of the society over all generations, present and future, along the paths allowed by its endowments of production factors such as technology, capital and labor'. Thus, the optimum size or growth rate of population depends on: (i) the social welfare function, (ii) the production function, and (iii) the demographic-economic interrelationship, which defines how national income is disposed into consumption (including the bearing and education of children) and savings on the one hand, and how the demographic and economic changes induced thereby, in turn, affect production capacities on the other. The optimum population growth path can then be derived by dynamically optimizing (i) under the constraints of (ii) and (iii), which gives the optimum population growth rate as a function of the parameters thereof. This paper estimates the optimum population growth rate of Korea by specifying (i), (ii), and (iii) based on recent developments in economic theory, solving the dynamic optimization problem, and inserting empirical estimates for Korea as the parametric values. The result shows that the optimum path of population growth in Korea is around TFR = 1.81, which is affected most sensitively, in terms of the size of the partial elasticity around the optimum path, by the cost of children, the share of capital income, the consumption rate, time preference, and the population elasticity of the utility function. According to a follow-up survey, there are quite significant variations in the perceived cost of children, time preference rate, and population elasticity of utility across socio-economic classes in Korea, which implies that, compared to their counterparts, the older generation and more highly educated classes prefer a higher growth path for the population of Korea.
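The dynamic optimization sketched in this abstract can be written, in generic notation, as a social planner's problem. The functional forms below (isoelastic welfare weight $N_t^{\varepsilon}$, per-child cost $q$) are illustrative assumptions consistent with the parameters the abstract names, not the paper's exact specification:

```latex
\max_{\{c_t,\,n_t\}}\; W=\sum_{t=0}^{\infty}\beta^{t}\,N_t^{\varepsilon}\,u(c_t)
\quad\text{s.t.}\quad
Y_t = F(K_t, A_t N_t),\qquad
K_{t+1} = (1-\delta)K_t + Y_t - N_t\,(c_t + q\,n_t),\qquad
N_{t+1} = (1+n_t)\,N_t,
```

where $\beta$ is the time-preference factor, $\varepsilon$ the population elasticity of the welfare function, and $q$ the per-child cost. The first-order conditions then deliver the optimum growth rate $n^{*}$ as a function of exactly the parameters listed in the abstract (cost of children, capital share, time preference, $\varepsilon$).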

Distributional Characteristics of Fault Segments in Cretaceous and Tertiary Rocks from Southeastern Gyeongsang Basin (경상분지 남동부 일대의 백악기 및 제3기 암류에서 발달하는 단층분절의 분포특성)

  • Park, Deok-Won
    • The Journal of the Petrological Society of Korea
    • /
    • v.27 no.3
    • /
    • pp.109-120
    • /
    • 2018
  • The distributional characteristics of fault segments in Cretaceous and Tertiary rocks from the southeastern Gyeongsang Basin were derived. 267 sets of linear fault segments were extracted from the curved fault lines delineated on the regional geological map. First, the directional angle(${\theta}$)-length(L) chart for the whole set of fault segments was made. From this chart, the general distribution pattern of fault segments was derived. The distribution curve in the chart was divided into four sections according to its overall shape. The NNE, NNW and WNW directions, corresponding to the peaks of these sections, indicate the Yangsan, Ulsan and Gaeum fault systems. The fault segment population shows a nearly symmetrical distribution with respect to the $N19^{\circ}E$ direction corresponding to the maximum peak. Second, the directional angle-frequency(N), mean length(Lm), total length(Lt) and density(${\rho}$) chart was made. The whole domain of this chart was divided into 19 domains in terms of the phases of the distribution curve. The directions corresponding to the peaks of these domains suggest the directions of representative stresses that acted on the rock body. Third, length-cumulative frequency graphs for the 18 sub-populations were made. In these graphs, the value of the exponent(${\lambda}$) increases in the clockwise direction($N10{\sim}20^{\circ}E{\rightarrow}N50{\sim}60^{\circ}E$) and counterclockwise direction($N10{\sim}20^{\circ}W{\rightarrow}N50{\sim}60^{\circ}W$), while the width of the length distribution and the mean length decrease. The chart for these sub-populations, which have mutually different evolution characteristics, reveals a cross-section of the evolutionary process. Fourth, the general distribution chart for the 18 graphs was made. From this chart, the graphs were classified into five groups(A~E) according to distribution area.
The lengths of fault segments increase in the order of group E ($N80{\sim}90^{\circ}E{\cdot}N70{\sim}80^{\circ}E{\cdot}N80{\sim}90^{\circ}W{\cdot}N50{\sim}60^{\circ}W{\cdot}N30{\sim}40^{\circ}W{\cdot}N40{\sim}50^{\circ}W$) < D ($N70{\sim}80^{\circ}W{\cdot}N60{\sim}70^{\circ}W{\cdot}N60{\sim}70^{\circ}E{\cdot}N50{\sim}60^{\circ}E{\cdot}N40{\sim}50^{\circ}E{\cdot}N0{\sim}10^{\circ}W$) < C ($N20{\sim}30^{\circ}W{\cdot}N10{\sim}20^{\circ}W$) < B ($N0{\sim}10^{\circ}E{\cdot}N30{\sim}40^{\circ}E$) < A ($N20{\sim}30^{\circ}E{\cdot}N10{\sim}20^{\circ}E$). In particular, the forms of the graphs gradually transition from a uniform distribution to an exponential one. Lastly, the values of the six parameters for fault-segment length were divided into five groups. Among the six parameters, the mean length and the length of the longest fault segment decrease in the order of group III ($N10^{\circ}W{\sim}N20^{\circ}E$) > IV ($N20{\sim}60^{\circ}E$) > II ($N10{\sim}60^{\circ}W$) > I ($N60{\sim}90^{\circ}W$) > V ($N60{\sim}90^{\circ}E$). The frequency, longest length, total length, mean length and density of the fault segments belonging to group V show the lowest values. This order of arrangement among the five groups suggests an interrelationship with the relative formation ages of the fault segments.
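The exponent ${\lambda}$ of a length-cumulative frequency graph of the kind described above can be estimated with a log-linear fit of the survival counts. A minimal sketch on synthetic data (not the paper's 267 segments), assuming the exponential form $N(\geq L) \propto e^{-\lambda L}$ implied by the exponent:

```python
import numpy as np

def cumulative_length_exponent(lengths, min_count=10):
    """Estimate lambda in N(>=L) ~ exp(-lambda * L) from a log-linear fit
    of the length-cumulative frequency curve (the graph type used for the
    18 directional sub-populations)."""
    lengths = np.sort(np.asarray(lengths, dtype=float))
    cum_freq = np.arange(lengths.size, 0, -1)   # segments with length >= L
    mask = cum_freq >= min_count                # drop the noisy sparse tail
    slope, _ = np.polyfit(lengths[mask], np.log(cum_freq[mask]), 1)
    return -slope

# Synthetic illustration: exponentially distributed segment lengths
rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=5000)  # true lambda = 1/2
lam = cumulative_length_exponent(sample)
```

A larger fitted ${\lambda}$ corresponds to a narrower length distribution with shorter mean length, matching the trend the abstract reports across directions.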

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for product-level market information. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than those previously offered. We applied the Word2Vec algorithm, a neural network based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of the product groups. As experimental data, product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as optimized parameters for further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC indexes were extracted based on cosine similarity. The market size of the extracted products, as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based methods or methods requiring multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of information use by changing the cosine similarity threshold. Furthermore, it has high potential for practical applications since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module can be advanced by imposing a proper order on the preprocessed dataset or by combining another measure such as Jaccard similarity with Word2Vec. Also, the product group clustering step can be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
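The grouping-and-summation step of the proposed method can be sketched as follows. The toy vectors below merely stand in for Word2Vec embeddings (in practice they would come from a trained model, e.g. gensim with vector_size=300 and window=15 as in the paper); the product names and sales figures are hypothetical:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def estimate_market_size(index_word, embeddings, sales, threshold=0.8):
    """Sum the sales of all products whose embedding is cosine-similar
    to the embedding of a KSIC index word (the bottom-up aggregation step)."""
    anchor = embeddings[index_word]
    group = [name for name, vec in embeddings.items()
             if name != index_word and cosine(anchor, vec) >= threshold]
    return group, sum(sales.get(name, 0) for name in group)

# Hypothetical embeddings and sales data standing in for the real pipeline
emb = {
    "coffee": np.array([1.0, 0.1, 0.0]),
    "espresso": np.array([0.9, 0.2, 0.1]),
    "latte": np.array([0.95, 0.15, 0.05]),
    "steel pipe": np.array([0.0, 1.0, 0.2]),
}
sales = {"espresso": 120, "latte": 80, "steel pipe": 500}
group, size = estimate_market_size("coffee", emb, sales)
```

Raising or lowering `threshold` widens or narrows the product category, which is the adjustable-granularity property the abstract highlights.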

Optimization and Development of Prediction Model on the Removal Condition of Livestock Wastewater using a Response Surface Method in the Photo-Fenton Oxidation Process (Photo-Fenton 산화공정에서 반응표면분석법을 이용한 축산폐수의 COD 처리조건 최적화 및 예측식 수립)

  • Cho, Il-Hyoung;Chang, Soon-Woong;Lee, Si-Jin
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.30 no.6
    • /
    • pp.642-652
    • /
    • 2008
  • The aim of our research was to apply experimental design methodology to optimizing the conditions of Photo-Fenton oxidation of residual livestock wastewater after the coagulation process. The Photo-Fenton oxidation reactions were mathematically described as a function of the parameters amount of Fe(II) ($x_1$), $H_2O_2$ ($x_2$) and pH ($x_3$), modeled using the Box-Behnken design, which fits second-order response surface models and is an alternative to central composite designs. The application of RSM with the Box-Behnken design yielded the following regression equation, an empirical relationship between the removal(%) of livestock wastewater and the test variables in coded units: Y = 79.3 + 15.61x$_1$ - 7.31x$_2$ - 4.26x$_3$ - 18x$_1{^2}$ - 10x$_2{^2}$ - 11.9x$_3{^2}$ + 2.49x$_1$x$_2$ - 4.4x$_2$x$_3$ - 1.65x$_1$x$_3$. The model predictions also agreed with the experimentally observed results (R$^2$ = 0.96). The results show that the treatment removal(%) in Photo-Fenton oxidation of livestock wastewater was significantly affected by the synergistic effect of the linear terms (Fe(II)($x_1$), $H_2O_2(x_2)$, pH(x$_3$)), whereas the quadratic terms Fe(II) $\times$ Fe(II)(x$_1{^2}$), $H_2O_2$ $\times$ $H_2O_2$(x$_2{^2}$) and pH $\times$ pH(x$_3{^2}$) showed significant antagonistic effects. $H_2O_2$ $\times$ pH(x$_2$x$_3$) also had an antagonistic effect among the cross-product terms. The estimated ridge of the expected maximum response and the optimal conditions for Y using canonical analysis were 84 $\pm$ 0.95% at Fe(II)(X$_1$) = 0.0146 mM, $H_2O_2$(X$_2$) = 0.0867 mM and pH(X$_3$) = 4.704, respectively. The optimal Fe/H$_2O_2$ ratio was 0.17 at pH 4.7.
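The reported optimum can be checked directly from the regression equation: for a quadratic response surface $Y = c + b^{\top}x + x^{\top}Ax$, the stationary point solves $\nabla Y = b + 2Ax = 0$. A minimal sketch using the abstract's coefficients:

```python
import numpy as np

# Coefficients of the fitted second-order model in coded units:
# Y = 79.3 + 15.61x1 - 7.31x2 - 4.26x3 - 18x1^2 - 10x2^2 - 11.9x3^2
#     + 2.49x1x2 - 4.4x2x3 - 1.65x1x3
b = np.array([15.61, -7.31, -4.26])        # linear terms
A = np.array([[-18.0, 1.245, -0.825],      # quadratic form x^T A x
              [1.245, -10.0, -2.2],        # (off-diagonals are half the
              [-0.825, -2.2, -11.9]])      #  interaction coefficients)

def response(x):
    return 79.3 + b @ x + x @ A @ x

# Stationary point: grad Y = b + 2Ax = 0  ->  x* = solve(2A, -b)
x_star = np.linalg.solve(2 * A, -b)
y_max = response(x_star)
```

Solving gives $x^{*} \approx (0.42, -0.28, -0.16)$ in coded units and a predicted maximum of about 84%, consistent with the canonical-analysis result of 84 $\pm$ 0.95% reported above.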

A study on Development Process of Fish Aquaculture in Japan - Case by Seabream Aquaculture - (일본 어류 양식업의 발전과정과 산지교체에 관한 연구 : 참돔양식업을 사례로)

  • 송정헌
    • The Journal of Fisheries Business Administration
    • /
    • v.34 no.2
    • /
    • pp.75-90
    • /
    • 2003
  • When we think of the fundamental problems of the aquaculture industry, there are several strict constraints, and consequently the industry is forced to change. Fish aquaculture faces a structural supply surplus in production, aggravation of fishing grounds, stagnant low prices due to the recent recession, and drastic changes in distribution circumstances. We are called on to discuss such questions as "how does fish aquaculture establish its status in the coastal fishery?", "will fish aquaculture grow in the future?", and if so, "how will it be restructured?" These issues have been examined for the mariculture of yellowtail, sea scallop and eel. However, seabream has hardly been studied, even though it has recently accounted for over 30% of total fish aquaculture production and occupies an important status in fish aquaculture. The objective of this study is to forecast the future movement of sea bream aquaculture. The first goal of the study is to contribute to managerial and economic studies of the aquaculture industry. The second goal is to identify the factors influencing the competition between production areas and the mechanisms involved. This study examines the competitive power of individual producing areas, their behavior, and the compelling factors, based on case study. Producing areas are categorized according to the following parameters: distance to market and availability of transportation, natural environment, time of formation of the producing area (leader/follower), major production items, scale of business and producing area, and degree of organization in production and sales. As a factor shaping the production areas of sea bream aquaculture, natural conditions, especially water temperature, are very important. Sea bream shows more active feeding and faster growth in areas where the water temperature does not fall below 13∼14$^{\circ}C$ during the winter.
Fish aquaculture is also constrained by transport distance. Aquacultured yellowtail is a mass-produced and mass-distributed item; it is sold by the cage and transported by ship. On the other hand, sea bream is sold in small amounts in markets and transported by truck, so its transportation cost is higher than that of yellowtail. Aquacultured sea bream has different product characteristics depending on transport distance, so we need to study the live fish and fresh fish markets separately. Live fish was the original product form of aquacultured sea bream. Transportation of live fish has more constraints than transportation of fresh fish: death rate and distance are highly correlated, and the loading capacity for live fish is lower. In the case of a 10 ton truck, live fish can be loaded only up to 1.5 tons, whereas fresh fish, which can be placed in boxes, can be loaded up to 5 to 6 tons. Because of these characteristics, live fish requires a location closer to the consumption area than fresh fish does. In the consumption markets, the size of fresh fish is mainly 0.8 to 2 kg. Live fish usually goes through auction, and quality is graded. The main purchasers are many small-sized restaurants, so relatively small farmers and distributors can sell it. Aquacultured sea bream has been transacted as fresh fish in general merchandise stores (GMS) since 1993, when the price plummeted. Economies of scale work in the case of fresh fish, whose characteristics are as follows: as large-scale demanders, GMS are the main purchasers of sea bream, the size of the fish is around 1.3 kg, and it mainly goes through negotiation. Aquacultured sea bream has been established as a representative food in GMS, which require stable mass supply, consistent size, and low price. Distribution of fresh fish is undertaken by large-scale distributors, which can satisfy the requirements of GMS. The market share in the Tokyo Central Wholesale Market shows that Mie Pref.
is dominating in live fish, while Ehime Pref. is dominating in fresh fish. Ehime Pref. showed remarkable growth in the 1990s. At present, dealings in live fish are decreasing while dealings in fresh fish are increasing in the Tokyo Central Wholesale Market, and the price of live fish is decreasing more than that of fresh fish. Even though Ehime Pref. has an ideal natural environment for sea bream aquaculture, its entry into sea bream aquaculture was late because it was located farther from consumers than the competing producing areas. However, Ehime Pref. became the number one producing area through sales of fresh fish in the 1990s; its production volume is almost 3 times that of Mie Pref., the number two production area. More conversion from yellowtail aquaculture to sea bream aquaculture is taking place in Ehime Pref., because Kagoshima Pref. has a better natural environment for yellowtail aquaculture. Transportation is worse than in Mie Pref., but this far-flung producing area makes up for it by increasing business scale. Ehime Pref. increases its market share of fresh fish by creating demand from GMS, and has developed market strategies such as quick returns at small profit, stable mass supply, and standardization in size. Ehime Pref. also increases its market power through the capital of large-scale commission agents. Secondly, Mie Pref. is close to markets and composed of small-scale farmers. Mie Pref. switched to sea bream aquaculture early because of the price decrease in aquacultured yellowtail and natural environmental problems, but it had not changed its approach until 1993, when the price of sea bream plummeted, because it had a better natural environment and transportation. Mie Pref. has the water temperature range required for sea bream aquaculture. However, the price of live sea bream continued to decline due to excessive production and the economic recession.
As a consequence, small-scale farmers faced a market price below the average production cost in 1993. In such a situation, the small-sized, inefficient operators in Mie Pref. were obliged to withdraw from sea bream aquaculture. Kumamoto Pref. is located farther from market sites and has unsuitable natural environmental conditions for sea bream aquaculture. Although Kumamoto Pref. is trying to convert to puffer fish aquaculture, which requires different rearing techniques, the aquaculture technique for puffer fish has not been established yet.


Measuring Consumer-Brand Relationship Quality (소비자-브랜드 관계 품질 측정에 관한 연구)

  • Kang, Myung-Soo;Kim, Byoung-Jai;Shin, Jong-Chil
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.111-131
    • /
    • 2007
  • As a brand becomes a core asset in creating a corporation's value, brand marketing has become one of the core strategies that corporations pursue. Recently, customer relationship management has come to center on the brand, beyond the mere possession and consumption of goods, and brand-based management has developed accordingly. The main reason for the increased interest in the relationship between the brand and the consumer is the acquisition of individual consumers and the development of relationships with those consumers. Through such relationships, a corporation can establish long-term relationships, which become a competitive advantage for the corporation; all of these processes become the strategic assets of corporations. The importance of, and the growing interest in, brands has also become a major academic issue. Brand equity, brand extension, brand identity, brand relationship, and brand community are concepts that have grown out of this interest. More specifically, in marketing, the study of brands has led to the study of factors related to building powerful brands and of the brand-building process. Recent studies concentrate primarily on the consumer-brand relationship, because brand loyalty cannot explain the dynamic quality aspects of loyalty, the consumer-brand relationship building process, and especially the interactions between brands and consumers. In studies of the consumer-brand relationship, a brand is not limited to an object of possession or consumption, but rather conceptualized as a partner. Most past studies concentrated on qualitative analyses of the consumer-brand relationship to show the depth and width of its performance; studies in Korea have been the same.
Recently, studies of the consumer-brand relationship have begun to concentrate on quantitative rather than qualitative analysis, or to go further with quantitative analysis to identify factors affecting the consumer-brand relationship. These new quantitative approaches show the possibility of using their results as a new way of viewing the consumer-brand relationship and of applying these concepts in marketing. Quantitative studies of the consumer-brand relationship already exist, but none of them treat the sub-dimensions of the consumer-brand relationship with theoretical proof for the measurement. In other words, most studies simply add up or average the sub-dimensions of the consumer-brand relationship. However, such studies presuppose that the sub-dimensions form an identical construct, and most past studies do not meet the condition that the sub-dimensions form a one-dimensional construct. From this, we question the validity of past studies and note their limits. The main purpose of this paper is to overcome the limits of past studies by drawing on previous work that treats sub-dimensions as a one-dimensional construct (Narver & Slater, 1990; Cronin & Taylor, 1992; Chang & Chen, 1998). In this study, the sample was split into two arbitrary groups to evaluate the reliability of the measurements, and reliability analyses were performed on each group. For convergent validity, correlations, Cronbach's α, and a one-factor-solution exploratory factor analysis were used. For discriminant validity, the correlation of the consumer-brand relationship was compared with that of involvement, a similar concept. Dependent correlations were also examined following Cohen and Cohen (1975, p. 35), and the results showed that involvement is a construct distinct from the 6 sub-dimensions of the consumer-brand relationship.
Through the results of the studies mentioned above, we were able to conclude that the sub-dimensions of the consumer-brand relationship can be viewed as a one-dimensional construct, which means that the one-dimensional construct of the consumer-brand relationship can be measured with reliability and validity. The result of this research is theoretically meaningful in that it treats the consumer-brand relationship as a one-dimensional construct and provides a basis for the methodologies previously performed. This research also opens the possibility of new research on the consumer-brand relationship in that it establishes that one-dimensional constructs constituting the consumer-brand relationship can be operationalized. In previous research, the consumer-brand relationship was classified into several types on the basis of its components, and a number of studies were performed with priority given to those types. However, as we can now operationalize a one-dimensional construct through this research, it is expected that various studies putting the level or strength of the consumer-brand relationship to practical use as a construct will be performed, rather than research focused on separate types of consumer-brand relationship. Additionally, we now have the theoretical basis for operationalizing the consumer-brand relationship as a one-dimensional construct, and it is anticipated that studies will use this construct as a dependent variable, parameter, mediator, and so on.
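One of the reliability tools mentioned above, Cronbach's alpha, can be computed directly from an item-score matrix. A minimal sketch on toy data (not the study's survey data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: three perfectly parallel items -> alpha = 1.0 exactly
scores = np.array([[1, 1, 1],
                   [2, 2, 2],
                   [3, 3, 3],
                   [4, 4, 4],
                   [5, 5, 5]])
alpha = cronbach_alpha(scores)
```

In practice, alpha would be computed per sub-dimension and per split group, as in the two-group reliability check the abstract describes.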


Low Temperature Growth of MCN(M=Ti, Hf) Coating Layers by Plasma Enhanced MOCVD and Study on Their Characteristics (플라즈마 보조 유기금속 화학기상 증착법에 의한 MCN(M=Ti, Hf) 코팅막의 저온성장과 그들의 특성연구)

  • Boo, Jin-Hyo;Heo, Cheol-Ho;Cho, Yong-Ki;Yoon, Joo-Sun;Han, Jeon-G.
    • Journal of the Korean Vacuum Society
    • /
    • v.15 no.6
    • /
    • pp.563-575
    • /
    • 2006
  • Ti(C,N) films are synthesized by pulsed DC plasma-enhanced metal-organic chemical vapor deposition (PEMOCVD) using the metal-organic compound tetrakis(diethylamide) titanium at $200-300^{\circ}C$. To compare plasma parameters, $H_2$ and $He/H_2$ gases are used as carrier gases in this study. The effect of $N_2$ and $NH_3$ as reactive gases in reducing the C content of the films is also evaluated. Radical formation and ionization behaviors in the plasma are analyzed in situ by optical emission spectroscopy (OES) at various pulsed bias voltages and gas species. The He and $H_2$ mixture is very effective in enhancing the ionization of radicals, especially for $N_2$. Ammonia $(NH_3)$ gas also strongly suppresses the formation of the CN radical, thereby decreasing the C content of the Ti(C,N) films a great deal. The microhardness of the films ranges from $1,250\;Hk_{0.01}$ to $1,760\;Hk_{0.01}$ depending on gas species and bias voltage; higher hardness is obtained with $H_2$ and $N_2$ gases and a bias voltage of 600 V. Hf(C,N) films were also obtained by pulsed DC PEMOCVD from tetrakis(diethylamide) hafnium and a $N_2/He-H_2$ mixture. The depositions were carried out at temperatures below $300^{\circ}C$ and a total chamber pressure of 1 Torr while varying the deposition parameters. Increasing the nitrogen content in the plasma decreased the growth rate, which is attributed to amorphous components and to the high carbon content of the film. In XRD analysis, the dominant lattice plane was the (111) direction, and a maximum microhardness of $2,460\;Hk_{0.025}$ was observed for a Hf(C,N) film grown under -600 V and a nitrogen flow ratio of 0.1. The optical emission spectra measured during the PEMOCVD growth of Hf(C,N) films are also discussed. $N_2,\;N_2^+$, H, He, CH, CN radicals and metal species (Hf) were detected, and the CH and CN radicals, which play an important role in the overall PEMOCVD process, increased the carbon content.

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1565-1576
    • /
    • 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from the top-of-atmosphere radiance observed by the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from the atmospheric correction is utilized to estimate various marine environmental parameters such as chlorophyll-a concentration, total suspended materials concentration, and absorption of dissolved organic matter. An accurate atmospheric correction is therefore fundamental, as it significantly impacts the reliability of all other ocean color products. In clear waters, for example, the atmospheric path radiance can be more than ten times higher than the water-leaving radiance in the blue wavelengths. This makes atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause more than a 10% error in Rrs. Therefore, quality assessment of Rrs after atmospheric correction is essential for reliable ocean environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) database was applied and modified to consider the different spectral characteristics of GOCI-II. This method is officially employed in the National Oceanic and Atmospheric Administration (NOAA)'s ocean color satellite data processing system. It provides quality analysis scores for Rrs ranging from 0 to 1 and classifies the water types into 23 categories. When the QA algorithm is applied to initial-phase GOCI-II data with less mature calibration, the score distribution peaks at a relatively low value of 0.625.
When the algorithm is applied to the improved GOCI-II atmospheric correction results with updated calibrations, however, the distribution peaks at a higher score of 0.875. The water-type analysis using the QA algorithm indicated that parts of the East Sea, South Sea, and Northwest Pacific Ocean are primarily characterized as relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly classified as highly turbid case-II waters. We expect the QA algorithm to support GOCI-II users not only in statistically identifying Rrs results with significant errors but also in achieving more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
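The scoring idea of the QA algorithm (class assignment by spectral similarity, then a 0-1 score from the fraction of bands inside the class envelope) can be sketched as follows. The two reference classes here are hypothetical stand-ins for the 23 SeaBASS-derived water classes:

```python
import numpy as np

def qa_score(rrs, ref_means, ref_lower, ref_upper):
    """Simplified sketch of Rrs quality-assurance scoring:
    1) normalize the spectrum, 2) assign the nearest reference water class
    by cosine similarity, 3) score = fraction of bands falling inside that
    class's envelope (the operational system uses 23 classes)."""
    nrrs = rrs / np.linalg.norm(rrs)
    sims = [nrrs @ m / np.linalg.norm(m) for m in ref_means]
    k = int(np.argmax(sims))
    inside = (nrrs >= ref_lower[k]) & (nrrs <= ref_upper[k])
    return k, float(inside.mean())

# Hypothetical unit-normalized reference spectra, NOT the real lookup tables
means = [np.array([0.6, 0.5, 0.4, 0.3]), np.array([0.2, 0.3, 0.5, 0.8])]
means = [m / np.linalg.norm(m) for m in means]
lower = [m - 0.05 for m in means]
upper = [m + 0.05 for m in means]

obs = np.array([0.61, 0.50, 0.41, 0.29])  # a clear-water-like spectrum
water_class, score = qa_score(obs, means, lower, upper)
```

A spectrum that matches its assigned class envelope at every band scores 1, while distorted Rrs (e.g. from a poor atmospheric correction) falls outside the envelope at some bands and scores lower.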

A Study on the Overall Economic Risks of a Hypothetical Severe Accident in Nuclear Power Plant Using the Delphi Method (델파이 기법을 이용한 원전사고의 종합적인 경제적 리스크 평가)

  • Jang, Han-Ki;Kim, Joo-Yeon;Lee, Jai-Ki
    • Journal of Radiation Protection and Research
    • /
    • v.33 no.4
    • /
    • pp.127-134
    • /
    • 2008
  • The potential economic impact of a hypothetical severe accident at a nuclear power plant (Uljin units 3/4) was estimated by applying the Delphi method, which is based on expert judgements and opinions, to quantify uncertain factors. For the purpose of this study, it is assumed that the radioactive plume travels inland. Since the economic risk can be divided into direct costs and indirect effects, and more uncertainties are involved in the latter, the direct costs were estimated first and the indirect effects were then estimated by applying a weighting factor to the direct costs. The Delphi method, however, is subject to the risk of distortion or discrimination of variables because of human behavior patterns, so a mathematical approach based on Bayesian inference was employed for data processing to improve the Delphi results, and a model for data processing was developed. One-dimensional Monte Carlo analysis was applied to obtain a distribution of values of the weighting factor. The mean and median of the weighting factor for the indirect effects appeared to be 2.59 and 2.08, respectively. These values are higher than the value suggested by OECD/NEA, 1.25; factors such as the small territory and a public attitude sensitive to radiation could have affected the judgement of the panel. The parameters of the model for estimating the direct costs were then classified as U- and V-types, and two-dimensional Monte Carlo analysis was applied to quantify the overall economic risk. The resulting median of the overall economic risk was about 3.9% of the gross domestic product (GDP) of Korea in 2006. When the cost of electricity loss, the highest direct cost, was not taken into account, the overall economic risk was reduced to 2.2% of GDP. This assessment can be used as a reference for justifying radiological emergency planning and preparedness.
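The one-dimensional Monte Carlo step for the weighting factor can be sketched as follows. The lognormal shape is an assumption (the abstract does not state the fitted distribution); its two parameters are chosen to reproduce the reported median (2.08) and mean (2.59):

```python
import math
import random

# Lognormal median = exp(mu); mean = exp(mu + sigma^2/2).
# Solve for mu, sigma from the reported median 2.08 and mean 2.59.
mu = math.log(2.08)
sigma = math.sqrt(2.0 * math.log(2.59 / 2.08))

rng = random.Random(42)
w = sorted(rng.lognormvariate(mu, sigma) for _ in range(200_000))
w_median = w[len(w) // 2]
w_mean = sum(w) / len(w)

# Overall economic risk sketch: direct cost D plus indirect effects w * D
D = 1.0                                # normalized direct cost
overall_median = D * (1.0 + w_median)  # median of D*(1+w) for fixed D
```

In the study itself, D is not a fixed scalar but the output of a direct-cost model with U- and V-type parameters, which is what motivates the two-dimensional Monte Carlo analysis described above.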

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of the base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset of Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable against bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for measuring fitness; prediction accuracy on this second portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of the other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated.
The experimental results showed that the proposed model outperformed the other models, such as a single KNN model and the conventional random subspace ensemble model.
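The random-subspace KNN ensemble and the genetic-algorithm search over the (k, feature subset) pairs described in this abstract can be sketched roughly as below. The tiny synthetic dataset, the GA settings (population size, one-point crossover, mutation rate), and the ensemble size are illustrative assumptions, not the paper's actual setup or data.

```python
import random

def knn_predict(train, query, k, features):
    """Majority vote among the k nearest neighbours, measured only on `features`."""
    dists = sorted(
        (sum((x[f] - query[f]) ** 2 for f in features), y) for x, y in train
    )
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

def ensemble_predict(train, query, members):
    """Combine the (k, feature-subset) base classifiers by majority vote."""
    votes = [knn_predict(train, query, k, feats) for k, feats in members]
    return max(set(votes), key=votes.count)

def accuracy(train, test, members):
    return sum(ensemble_predict(train, x, members) == y for x, y in test) / len(test)

def evolve(train, holdout, n_features, n_members=5, pop=8, gens=10, seed=0):
    """Toy GA: a chromosome encodes k and a feature subset for every member.
    Fitness is accuracy on a held-out portion of the training data, as in the
    abstract's overfitting-avoidance scheme."""
    rng = random.Random(seed)

    def random_member():
        k = rng.choice([1, 3, 5])
        feats = rng.sample(range(n_features), rng.randint(1, n_features))
        return (k, feats)

    population = [[random_member() for _ in range(n_members)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda m: -accuracy(train, holdout, m))
        parents = scored[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_members)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.3:               # mutation: redraw one member
                child[rng.randrange(n_members)] = random_member()
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: accuracy(train, holdout, m))

# Hypothetical two-class data: feature means at 0.0 and 1.0 with modest noise.
rng = random.Random(1)
def make_point(label):
    base = 1.0 if label else 0.0
    return ([base + rng.gauss(0, 0.3) for _ in range(4)], label)

data = [make_point(i % 2) for i in range(60)]
train, holdout = data[:40], data[40:]
best = evolve(train, holdout, n_features=4)
print("holdout accuracy:", accuracy(train, holdout, best))
```

Sampling a different feature subset per member is what diversifies the base classifiers here, and KNN's sensitivity to the feature space is exactly why the abstract pairs it with the random subspace method.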