The style of life shown by Elder Lee Sang-dong through the encounter between Confucianism and early Protestantism (이상동 장로가 유교와 초기 개신교 만남으로 보여준 삶의 양식)
Journal of Christian Education in Korea, v.78, pp.153-189, 2024
This study sought to identify the characteristics of the Protestant faith that emerged during the early missionary period of Korean Protestantism in the Andong region of Gyeongsang Province, where Confucianism flourished. Focusing on the early days of Korean Protestantism (1905-1935), it examines the life and lifestyle of Elder Lee Sang-dong, a nobleman with a background in Toegye Confucianism who converted from Confucianism. Elder Lee Sang-dong's life and journey of faith can be illuminated, and their implications drawn out, through the faith-community theology of the Christian education scholar J. H. Westerhoff III. Westerhoff viewed Christian education as the formation of individuals' values and worldview within the community as the faith community adapts to society and culture. In his view, these values appear as a way of life within social and cultural processes, and this way of life reveals different aspects of life in different environments. As Lee Sang-dong began reading the Bible, he came to believe in Jesus and accepted the biblical worldview. The values of the Bible thus accepted opened up, in the encounter between late Joseon Confucianism and early Protestant church history, a worldview shaped by the Christian Bible rather than by Toegye Neo-Confucianism. Thus he lived the lifestyle of a believer who put the words of the Bible into practice while living as a Confucian nobleman. He founded Posan-dong Church and began church life with a faith community prepared for martyrdom. He was the first in Andong to proclaim the March 1st Independence Movement on his own, advocated for Korean independence, freed his slaves as a demonstration of the equality movement, and, by establishing new education at Deoksin Seosuk, faithfully fulfilled his role as a teacher of the enlightenment movement and of catechesis.
In the early days of Korean Protestantism, Lee Sang-dong, a layman who held the office of elder rather than a ministerial office in the institutional church, is a practical example of the values and lifestyle that emerged from the encounter between Confucianism and Protestantism in the Andong region, the stronghold of Confucianism. His case can be seen as offering deep insight both for modern church history and from the perspective of Christian education.
Ultraviolet (UV) light is one of the abiotic stress factors and causes oxidative stress in plants, but a suitable level of UV radiation can be used to enhance the phytochemical content of plants. The accumulation of antioxidant phenolic compounds in UV-exposed plants may vary depending on the condition of the plant (species, cultivar, age, etc.) and of the UV treatment (wavelength, energy, irradiation period, etc.). To date, however, little research has been conducted on how leaf thickness affects the pattern of phytochemical accumulation. In this study, we conducted an experiment to find out how the antioxidant phenolic content of kale (Brassica oleracea var. acephala) leaves of different thicknesses reacts to UV-A light. Kale seedlings were grown in a controlled growth chamber for four weeks under the following conditions: 20℃, 60% relative humidity, a 12-hour photoperiod, a fluorescent light source, and a photosynthetic photon flux density of 121±10 µmol m⁻² s⁻¹. The kale plants were then transferred to two chambers with different CO2 concentrations (382±3.2 and 1,027±11.7 µmol mol⁻¹) and grown for 10 days. Each group of kale plants was then subjected to UV-A LED light (275+285 nm peak wavelengths) at 25.4 W m⁻² for 5 days. As a result, kale plants whose leaves had thickened under the high-CO2 treatment showed lower UV sensitivity when exposed to UV-A than plants with thinner leaves. The Fv/Fm (maximum quantum yield of photosystem II) in kale leaves exposed to UV-A in the low-CO2 environment decreased abruptly and significantly immediately after UV treatment, but not in kale leaves exposed to UV-A in the high-CO2 environment. The accumulation patterns of total phenolic content, antioxidant capacity, and individual phenolic compounds varied with leaf thickness. In conclusion, this experiment suggests that UV intensity should be varied according to leaf thickness (age, etc.) during UV treatment for phytochemical enhancement.
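The Fv/Fm ratio used above as the stress indicator is derived from dark-adapted chlorophyll fluorescence readings. A minimal sketch of the standard calculation (the F0 and Fm values below are illustrative, not measurements from this study):

```python
def fv_fm(f0: float, fm: float) -> float:
    """Maximum quantum yield of photosystem II: Fv/Fm = (Fm - F0) / Fm,
    where F0 is the minimal and Fm the maximal fluorescence of a
    dark-adapted leaf."""
    return (fm - f0) / fm

# Illustrative readings; healthy, unstressed leaves typically fall near 0.8.
print(round(fv_fm(f0=300.0, fm=1500.0), 2))  # 0.8
```

A UV-stressed leaf would show a depressed Fm and hence a lower ratio, which is the abrupt drop the abstract reports for thin-leaved kale.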
Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed to compensate for the weaknesses of traditional asset allocation methods and to take over tasks that are difficult for them. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a representative risk-based asset allocation model focused on the volatility of assets. Because it avoids investment risk structurally, it is stable in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can not only handle billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, so it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the performance of the optimized portfolio. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model. In doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
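A minimal sketch of the combination just described, assuming the naive inverse-volatility form of risk parity and hard-coded volatility predictions standing in for actual XGBoost output (the abstract does not specify the features or model settings):

```python
import numpy as np

# Hypothetical next-period volatility predictions for three assets,
# standing in for the output of the study's XGBoost model.
pred_vol = np.array([0.25, 0.15, 0.05])

# Historical correlation matrix estimated from past returns (illustrative).
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])

# Apply the predicted risk in the covariance estimation step:
# cov_ij = corr_ij * sigma_i * sigma_j with forward-looking sigmas.
# (A full risk parity model would solve for equal risk contributions
# under this covariance matrix.)
cov = corr * np.outer(pred_vol, pred_vol)

# Naive risk parity: weight assets by inverse volatility so that each
# contributes roughly equally to portfolio risk.
weights = (1.0 / pred_vol) / (1.0 / pred_vol).sum()

print(weights.round(3))  # the most volatile asset gets the smallest weight
```

The point of the substitution is that the weights respond to predicted, not purely historical, risk, which is how the proposed model aims to shrink the gap between the estimation and investment periods.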
For the empirical test of the proposed model, we used Korean stock market price data covering a total of 17 years, from 2003 to 2019. The dataset comprises the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and estimation error. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and estimation errors were reduced in 9 out of 10 industry sectors. Reducing estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results thus show that portfolio performance can be improved by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. While there are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning.
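The moving-window back-test can be sketched as follows. The window sizes follow the abstract (1,000 in-sample, 20 out-of-sample); the inverse-volatility estimator inside the loop is a placeholder for the study's XGBoost-augmented risk parity step:

```python
import numpy as np

def moving_window_backtest(returns, in_sample=1000, out_sample=20):
    """Walk a (periods x assets) return matrix with a rolling window:
    estimate weights on `in_sample` observations, hold them for
    `out_sample` periods, then roll forward and rebalance.
    Returns the cumulative portfolio return over all holding periods."""
    n = len(returns)
    portfolio_returns = []
    start = 0
    while start + in_sample + out_sample <= n:
        est = returns[start:start + in_sample]
        # Placeholder estimator: inverse-volatility weights from the
        # in-sample window (the study plugs predicted risk in here).
        vol = est.std(axis=0)
        w = (1.0 / vol) / (1.0 / vol).sum()
        hold = returns[start + in_sample:start + in_sample + out_sample]
        portfolio_returns.extend(hold @ w)
        start += out_sample
    return np.prod(1.0 + np.array(portfolio_returns)) - 1.0
```

With roughly 4,100 daily observations over 17 years, rolling forward 20 days at a time yields on the order of 154 rebalances, consistent with the back-testing count reported above.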
This study is thus meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
Brand equity is one of the most important concepts in business practice as well as in academic research. Successful brands allow marketers to gain competitive advantage (Lassar et al., 1995), including the opportunity for successful extensions, resilience against competitors' promotional pressure, and the ability to create barriers to competitive entry (Farquhar, 1989). Branding plays a special role in service firms because strong brands increase trust in intangible products (Berry, 2000), enabling customers to better visualize and understand them. They reduce customers' perceived monetary, social, and safety risks in buying services, which are obstacles to evaluating a service correctly before purchase. A high level of brand equity also increases consumer satisfaction, repurchase intent, and loyalty. Brand equity can be considered a mixture that includes both financial assets and relationships. In fact, brand equity can be viewed as the value added to the product (Keller, 1993), or as the perceived value of the product in consumers' minds. Mahajan et al. (1990) claim that customer-based brand equity can be measured by the level of consumers' perceptions. Several researchers discuss brand equity along two dimensions: consumer perception and consumer behavior. Aaker (1991) suggests measuring brand equity through price premium, loyalty, perceived quality, and brand associations. Viewing brand equity as the consumer's behavior toward a brand, Keller (1993) proposes similar dimensions: brand awareness and brand knowledge. Thus, past studies tend to identify brand equity as a multidimensional construct consisting of brand loyalty, brand awareness, brand knowledge, customer satisfaction, perceived equity, brand associations, and other proprietary assets (Aaker, 1991, 1996; Blackston, 1995; Cobb-Walgren et al., 1995; Na, 1995).
Other studies tend to regard brand equity and other brand assets, such as brand knowledge, brand awareness, brand image, brand loyalty, and perceived quality, as independent but related constructs (Keller, 1993; Kirmani and Zeithaml, 1993). Walters (1978) defined information search as "a psychological or physical action a consumer takes in order to acquire information about a product or store." Each consumer, however, has different methods of information search. There are two methods of information search: internal and external. Internal search is "search of information already saved in the memory of the individual consumer" (Engel and Blackwell, 1982), that is, "memory of a previous purchase experience or information from a previous search" (Beales, Mazis, Salop, and Staelin, 1981). External search is "a completely voluntary decision made in order to obtain new information" (Engel and Blackwell, 1982), that is, "actions of a consumer to acquire necessary information by such methods as intentionally exposing oneself to advertisements, talking to friends or family, or visiting a store" (Beales, Mazis, Salop, and Staelin, 1981). There are many sources for consumers' information search, including advertising sources such as the internet, radio, television, newspapers, and magazines; information supplied by businesses, such as salespeople, packaging, and in-store information; consumer sources such as family, friends, and colleagues; and neutral sources such as consumer protection agencies, government agencies, and the mass media. Understanding consumers' purchasing behavior is a key factor in a firm's ability to attract and retain customers, improve its prospects for survival and growth, and enhance shareholder value. Therefore, marketers should understand consumers both as individuals and as market segments. One theory of consumer behavior holds that individuals are rational: they think and move through stages when making a purchase decision.
This view of consumers as rational thinkers who move through stages has led to the identification of a consumer buying decision process. This decision process, with its different levels of involvement and influencing factors, has been widely accepted and is fundamental to understanding purchase intention, which represents what consumers think they will buy. Brand equity is a very important asset to companies, often more valuable than the product itself. This paper studies a brand equity model and its influencing factors, including information processes such as information searching and information sources, in the fashion markets of Asia and Europe. Information searching and information sources influence brand knowledge, which in turn influences consumers' purchase decisions. Nine research hypotheses are drawn to test the relationships among the antecedents of brand equity and purchase intention, and among brand knowledge, brand value, brand attitude, and brand loyalty:
H1. Information searching influences brand knowledge positively.
H2. Information sources influence brand knowledge positively.
H3. Brand knowledge influences brand attitude.
H4. Brand knowledge influences brand value.
H5. Brand attitude influences brand loyalty.
H6. Brand attitude influences brand value.
H7. Brand loyalty influences purchase intention.
H8. Brand value influences purchase intention.
H9. The research model will be the same in Asia and Europe.
We performed structural equation model analysis to test the hypotheses suggested in this study. The model fitting index of the research model in Asia was
Recommender systems have become one of the most important technologies in e-commerce today. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology for serving these needs. Many past studies on recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. To generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who have no such information, CF cannot come up with recommendations (the cold-start problem). As the numbers of products and customers increase, the scale of the data grows exponentially and most of the data cells are empty; this sparse dataset makes the computation for recommendations extremely hard (the sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, it becomes inaccurate when there are many users with rare and unique tastes (the gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them. Therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the degree centrality of the users.
Then, different similarity measures and recommendation methods are applied to these two datasets. The detailed algorithm is as follows:
Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user).
Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality is lower than a pre-set threshold. The threshold value is determined by simulation so that the accuracy of CF on the remaining dataset is maximized.
Step 3: Apply an ordinary CF algorithm to the remaining dataset.
Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them; a 'popular item' method is used instead.
The F measures of the two datasets are weighted by their numbers of nodes and summed to form the final performance metric. To test the performance improvement of the new algorithm, an empirical study was conducted using a publicly available dataset: the MovieLens data from the GroupLens research team. We used 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using the 'best-N-neighbors' method and cosine similarity. The empirical results show that the F measure improved by about 11% on average when the proposed algorithm was used.
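The network-projection and gray sheep separation steps of the algorithm can be sketched on a toy dataset (pure Python; the user and item names, the threshold, and the CF step itself are illustrative, not taken from the study):

```python
from collections import defaultdict
from itertools import combinations

# Toy user-item ratings (a two-mode network); the MovieLens data in
# the study has 943 users and 1,682 movies.
ratings = {
    'u1': {'i1', 'i2', 'i3'},
    'u2': {'i1', 'i2'},
    'u3': {'i2', 'i3'},
    'u4': {'i9'},  # unique taste: shares no items -> gray sheep
}

# Step 1: project the two-mode (user-item) network onto a one-mode
# (user-user) network: link users who rated at least one common item.
links = defaultdict(set)
for u, v in combinations(ratings, 2):
    if ratings[u] & ratings[v]:
        links[u].add(v)
        links[v].add(u)

# Step 2: degree centrality = number of direct user-user links; users
# below a threshold are separated as gray sheep. The study tunes this
# threshold by simulation; 1 here is illustrative.
THRESHOLD = 1
degree = {u: len(links[u]) for u in ratings}
gray_sheep = {u for u, d in degree.items() if d < THRESHOLD}
others = set(ratings) - gray_sheep  # Step 3: ordinary CF runs on these

# Step 4 (fallback): recommend the most popular items to gray sheep.
popularity = defaultdict(int)
for items in ratings.values():
    for i in items:
        popularity[i] += 1
popular_items = sorted(popularity, key=popularity.get, reverse=True)

print(gray_sheep)        # {'u4'}
print(popular_items[0])  # 'i2'
```

The isolated user u4 falls below the degree threshold and is routed to the popular-item fallback, while the well-connected users stay in the dataset handled by ordinary CF.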