• Title/Summary/Keyword: Selection Process


The Mediating Role of Perceived Risk in the Relationships Between Enduring Product Involvement and Trust Expectation (지속적 제품관여도와 소비자 요구신뢰수준 간의 영향관계: 인지된 위험의 매개 역할에 대한 실증분석을 중심으로)

  • Hong, Ilyoo B.;Kim, Taeha;Cha, Hoon S.
    • Asia pacific journal of information systems
    • /
    • v.23 no.4
    • /
    • pp.103-128
    • /
    • 2013
  • When a consumer needs a product or service and multiple sellers are available online, the process of selecting a seller to buy from is complex, since it involves many behavioral dimensions that have to be taken into account. As a part of this selection process, consumers may set a minimum trust expectation that can be used to screen out less trustworthy sellers. In previous research, the level of consumers' trust expectation has been anchored on two important factors: product involvement and perceived risk. Product involvement refers to the extent to which a consumer perceives a specific product as important. Thus, higher product involvement may result in higher trust expectation in sellers. On the other hand, other related studies found that when consumers perceived a higher level of risk (e.g., credit card fraud risk), they set a higher trust expectation as well. While abundant research exists addressing the relationship between product involvement and perceived risk, little attention has been paid to an integrative view of the link between the two constructs and their impacts on trust expectation. The present paper is a step toward filling this research gap. The purpose of this paper is to understand the process by which a consumer chooses an online merchant by examining the relationships among product involvement, perceived risk, trust expectation, and intention to buy from an e-tailer. We specifically focus on the mediating role of perceived risk in the relationship between enduring product involvement and trust expectation. That is, we question whether product involvement affects trust expectation directly, without mediation, or indirectly, mediated by perceived risk. The research model with four hypotheses was initially tested using data gathered from 635 respondents through an online survey method. The structural equation modeling technique with partial least squares was used to validate the instrument and the proposed model. The results showed that three out of the four hypotheses formulated were supported. First, we found that the intention to buy from a digital storefront is positively and significantly influenced by the trust expectation, providing support for H4 (trust expectation → purchase intention). Second, perceived risk was found to be a strong predictor of trust expectation, supporting H2 as well (perceived risk → trust expectation). Third, we did not find any evidence of a direct influence of product involvement, which caused H3 to be rejected (product involvement → trust expectation). Finally, we found a significant positive relationship between product involvement and perceived risk (H1: product involvement → perceived risk), which suggests the possibility of complete mediation by perceived risk in the relationship between enduring product involvement and trust expectation. As a result, we conducted an additional test for the mediation effect by comparing the original model with a revised model without the mediator variable of perceived risk. Indeed, by intentionally eliminating the variable of perceived risk, we found a strong influence of product involvement on trust expectation that had been suppressed (i.e., mediated) by perceived risk in the original model. The Sobel test statistically confirmed the complete mediation effect. Results of this study offer the following key findings.
First, enduring product involvement is positively related to perceived risk, implying that the more enduringly a consumer is involved with a given product, the greater the risk he or she is likely to perceive with regard to the online purchase of that product. Second, perceived risk is positively related to trust expectation. A consumer with great risk perceptions concerning the online purchase is likely to buy from a highly trustworthy online merchant, thereby mitigating potential risks. Finally, product involvement was found to have no direct influence on trust expectation; the relationship between the two constructs is indirect and mediated by perceived risk. This is perhaps an important theoretical integration of two separate streams of literature on product involvement and perceived risk. The present research also provides useful implications for practitioners as well as academicians. First, one implication for practicing managers of online retail stores is that they should invest in reducing consumers' perceived risk in order to lower the trust expectation and thus increase the consumers' intention to purchase products or services. Second, an academic implication is that perceived risk mediates the relationship between enduring product involvement and trust expectation. Further research is needed to elaborate the theoretical relationships among the constructs under consideration.
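The mediation logic summarized in this abstract can be illustrated with a small sketch. This is not the authors' PLS-SEM analysis; it assumes a hypothetical data frame of composite construct scores named involvement, perceived_risk, and trust_expectation, and checks the indirect effect with ordinary least squares regressions plus a Sobel test:

```python
# Minimal sketch of the mediation test described above; not the authors' PLS-SEM
# analysis. Assumes a hypothetical DataFrame of composite construct scores with
# columns 'involvement', 'perceived_risk', and 'trust_expectation'.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def sobel_mediation(df: pd.DataFrame) -> float:
    """Test whether perceived risk mediates involvement -> trust expectation."""
    # Path a: product involvement -> perceived risk (H1)
    m_a = sm.OLS(df["perceived_risk"], sm.add_constant(df["involvement"])).fit()
    a, se_a = m_a.params["involvement"], m_a.bse["involvement"]

    # Path b: perceived risk -> trust expectation, controlling for involvement (H2, H3)
    X = sm.add_constant(df[["involvement", "perceived_risk"]])
    m_b = sm.OLS(df["trust_expectation"], X).fit()
    b, se_b = m_b.params["perceived_risk"], m_b.bse["perceived_risk"]

    # Sobel statistic for the indirect effect a*b
    z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    print(f"indirect effect = {a * b:.3f}, Sobel z = {z:.2f}, p = {p:.4f}")
    return p
```

In practice one would also bootstrap the indirect effect or, as in the paper, estimate the full structural model with partial least squares; the sketch only mirrors the reported comparison of the direct and mediated paths.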

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and uniformity of 5.8% are proven in computer simulation and experimentally demonstrated. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and flexibility of design have been widely developed for many applications such as optical information processing, optical computing, optical interconnection, etc. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when we use the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation may be time intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications of high preciseness. For that reason, we attempt to find a new approach combining the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is merely a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier transforming the encoded parameters on the hologram into the value to be solved. Depending on the speed of the computer, this process can even last up to ten minutes. It will be more effective if, instead of merely generating random holograms in the initial process, a set of approximately desired holograms is employed. By doing so, the initial population will contain fewer trial holograms, which is equivalent to reducing the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed. Consequently, the initial population contains fewer random holograms and is compensated by approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] to acquire approximately desired holograms is carried out. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain the approximately desired holograms, which are in fairly good agreement with what we suggested in the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modification of the initial step. Hence, the verified results in Ref. [2] for parameters such as the probability of crossover and mutation, the tournament size, and the crossover block size remain unchanged, aside from the reduced population size.
The reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and uniformity of 5.8% are measured. We see that the simulation and experiment results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
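The hybrid idea described above, seeding the GA's initial population with network-predicted holograms instead of purely random ones, can be sketched roughly as follows. This is not the authors' implementation; the array shapes, the row-block crossover, and the fitness based on the squared FFT of a binary phase mask are illustrative assumptions only:

```python
# Illustrative sketch of the hybrid idea (not the authors' code): seed part of the
# GA population with holograms predicted by a pre-trained network, then run a
# standard GA whose fitness Fourier-transforms each binary phase hologram.
# Array shapes, the crossover scheme, and the fitness definition are assumptions.
import numpy as np

def fitness(hologram, target):
    """Score a 0/1 phase mask by comparing its |FFT|^2 to the normalized target image."""
    field = np.exp(1j * np.pi * hologram)            # binary phase 0 / pi
    recon = np.abs(np.fft.fft2(field)) ** 2
    recon /= recon.sum()
    return -np.sum((recon - target) ** 2)            # higher is better

def hybrid_ga(target, ann_seeds, pop_size=30, n_iter=2000, p_cx=0.75, p_mut=0.001, rng=None):
    """ann_seeds: list of 0/1 arrays predicted by a trained network (the 'hybrid' part)."""
    if rng is None:
        rng = np.random.default_rng(0)
    shape = target.shape
    # Initial population: ANN-predicted holograms plus random fillers
    pop = list(ann_seeds[:pop_size])
    while len(pop) < pop_size:
        pop.append(rng.integers(0, 2, shape))
    for _ in range(n_iter):
        scores = np.array([fitness(h, target) for h in pop])
        # Tournament selection (size 3)
        parents = [pop[max(rng.choice(pop_size, 3), key=lambda i: scores[i])]
                   for _ in range(pop_size)]
        children = []
        for i in range(0, pop_size, 2):
            a, b = parents[i].copy(), parents[(i + 1) % pop_size].copy()
            if rng.random() < p_cx:                  # single-point row-block crossover
                cut = rng.integers(1, shape[0])
                a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
            for child in (a, b):
                flip = rng.random(shape) < p_mut     # bit-flip mutation
                child[flip] ^= 1
                children.append(child)
        pop = children[:pop_size]
    return max(pop, key=lambda h: fitness(h, target))
```

The reported parameters (population size 30, 2000 iterations, crossover probability 0.75, mutation probability 0.001) appear here as defaults; the remaining GA machinery is deliberately generic.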


A Comparative Study on Confirmation Hearings for Secretary of Education in South Korea and the United States - Focusing on Cases from the Administrations of Myungbak Lee and Barack Obama - (한국과 미국 교육부 장관 인사청문회 비교 - 이명박 정부와 오바마 정부의 사례를 중심으로 -)

  • Yoo, Dong-Hoon;Jin, Sun-Mi
    • Korean Journal of Comparative Education
    • /
    • v.26 no.3
    • /
    • pp.103-132
    • /
    • 2016
  • This study aims to suggest ways of improving the quality of confirmation hearings for the Secretary of Education in South Korea by: 1) comparing the confirmation processes used by the presidents of South Korea and the United States; and 2) contrasting the procedures and contents of hearings for Education Secretary nominees in South Korea and the United States. When the process of selecting a nominee for Secretary of Education started, the Blue House Office of the Secretary conducted an investigation into the nominee's personal details, family matters, and so on within a week. The investigation, with its very limited time frame, reduced the selection process to a mere verification of the nominee's morality. On the other hand, the White House Office of Presidential Personnel carried out a thorough investigation of the nominee jointly with the White House Counsel, the Federal Bureau of Investigation (FBI), and the Internal Revenue Service, taking from two to three months. In terms of the contents of the hearings, the members of the ruling party mainly asked the nominee for clarification and for his ideas on certain policies, whereas the opposition party focused mostly on verifying his morality. In addition, the committee members led the hearing while strongly expressing their own political ideologies. However, in the case of the hearings in the United States, the committee members did not ask questions to verify the nominee's morality but rather questions that could help them understand the nominee's experience, professionalism, and perspective on nationwide issues regarding education and federal education policy. As for the procedural characteristics of South Korean hearings, the Committee on Education conducted the hearing with a week of advance preparation; however, submission of required reports by the nominee, holding the confirmation hearing, and reporting on the hearing were not mandatory in order to appoint the nominee as Secretary of Education. On the contrary, in the United States, the members of the Committee on Health, Education, Labor, and Pensions spent about a month preparing for the confirmation hearing, and for the nominee to be appointed, submission of reports and the committee's approval of the President's nomination were required. Based on these results, this research suggests that it is important to develop a policy that can strengthen the substance of the nomination process, to establish a professional agency for personnel investigation, to make the submission of personal reports before hearings mandatory, to extend the time frame for hearing preparation, to secure enough time for nominees to respond, and to increase the members' autonomy.

The Pursuit of Rationality and Mathematics Education (합리성의 추구와 수학교육)

  • Kang Wan
    • The Mathematical Education
    • /
    • v.24 no.2
    • /
    • pp.105-116
    • /
    • 1986
  • For any thought or body of knowledge, its growth and development are closely related to the society in which it develops and grows. As Feuerbach says, the birth of the spirit needs the existence of two human beings, i.e. a social background, just as the birth of the body does. But, from the educational viewpoint, the spread and growth of a thought or body of knowledge that favorably influences the development of a society must also be considered. We discuss the goal and function of mathematics education in relation to the prosperity of a technological civilization. But the goal and the function are not unrelated to the spiritual culture which is the basis of the technological civilization. Most societies of today can be called open democratic societies, or societies which at least claim to be such. The concept of rationality in such societies is a methodological principle which completes the democratic society. At the same time, it is asserted as an educational value concept which comprehensively explains the standpoint and attitude of one who is educated in such a society. In particular, we can consider the cultivation of mathematical thinking or logical thinking, as a goal of mathematics education, to be a concept included in such an educational value concept. The use of the concept of rationality depends on various viewpoints and criteria. We can analyze the concept of rationality in two aspects: the aspect of human behavior and that of human belief or knowledge. Generally speaking, rationality in human behavior means a problem-solving power or a reasoning power as an instrument, i.e. the human economical cast of mind. But a conceptual condition like this cannot include a value concept. On the other hand, rationality in human knowledge is related to the problem of rationality in human belief. For any statement which represents a certain sort of knowledge, its universal validity cannot be assured. Statements of value judgment, which represent philosophical knowledge, cannot but relate to the argument on rationality in human belief, because their truth or falsity cannot easily be settled. Positive statements in science also relate to the argument on rationality in human belief, because there is no necessary relation between a proposition stating an all-pervasive rule and the propositions induced from the results of observation. In particular, the logical statements of logic or mathematics resolve themselves into a question of rationality in human belief after all, because all logical propositions have their logical propriety within a certain deductive system which must start from some axioms, and the selection and construction of an axiomatic system cannot but depend on the belief of a person himself. Thus, we can conclude that a question of rationality in knowledge or belief is a question of rationality both in the content of the belief or knowledge and in the process by which one holds one's own belief. And the rationality of both the content and the process is an ideal form of human ability and attitude in one's rational behavior. Considering the advancement of mathematical knowledge, we can say that mathematics is a good example which reflects such human rationality, i.e. the human ability and attitude. By this property of mathematics itself, mathematics is deeply rooted as a good subject, needed in moulding the ability and attitude of a rational person who contributes to the development of the open democratic society he belongs to. But it is necessary to analyze the practice and pursuit of rationality, especially in mathematics education. The mathematics teacher must aim at the rationality of the process by which mathematical belief is maintained. In fact, there is no problem in the rationality of content as long as the mathematics teacher does not draw mathematical conclusions without grounds. But, in the mathematical activities he presents in his class, the mathematics teacher must be able to show that even his own belief in the efficiency and propriety of mathematical activities can be altered and advanced by new thinking or new experiences.


Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has raised the interest of many researchers seeking to manage it. It also requires professionals capable of classifying relevant information, and hence text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, different kinds of techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have been based on proposing a new algorithm or modifying an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, instead of proposing a new algorithm or modifying an existing algorithm, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets most of the time contain noise, or in other words noisy data, which can affect the decisions made by the classifiers built from these data. In this study, we consider that data from different domains, that is, heterogeneous data, might have noise-like characteristics which can be utilized in the classification process. In order to build a classifier, a machine learning algorithm is run based on the assumption that the characteristics of the training data and the target data are the same or very similar to each other. However, in the case of unstructured data such as text, the features are determined according to the vocabulary included in the documents. If the viewpoints of the learning data and the target data are different, the features may appear different between these two sets of data. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing the document classifier. Data coming from various kinds of sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms because they are not designed to recognize different types of data representation at one time and to put them together in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning in our study. However, unlabeled data might degrade the performance of the document classifier.
Therefore, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used, namely news, Twitter, and blogs.
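The general mechanism the abstract describes, building several feature views, pseudo-labeling unlabeled documents from other domains, and keeping only confident decisions, can be sketched as follows. This is not the published RSESLA implementation; the two feature views, the classifiers, and the confidence threshold are illustrative assumptions:

```python
# Rough sketch of the multi-view, confidence-based selection described above; not
# the published RSESLA implementation. The two feature views, the classifiers, and
# the 0.9 confidence threshold are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

def semi_supervised_ensemble(labeled_docs, labels, unlabeled_docs, threshold=0.9):
    """Pseudo-label heterogeneous unlabeled documents only where the views agree confidently."""
    views = [CountVectorizer(), TfidfVectorizer()]                     # two feature views
    models = [MultinomialNB(), LogisticRegression(max_iter=1000)]
    fitted = []
    for vec, clf in zip(views, models):
        X = vec.fit_transform(labeled_docs)
        fitted.append((vec, clf.fit(X, labels)))

    # Average class probabilities across views; keep only confident pseudo-labels
    probs = np.mean([clf.predict_proba(vec.transform(unlabeled_docs))
                     for vec, clf in fitted], axis=0)
    confident = probs.max(axis=1) >= threshold
    pseudo = fitted[0][1].classes_[probs.argmax(axis=1)]
    new_docs = [d for d, keep in zip(unlabeled_docs, confident) if keep]

    # Retrain a final classifier on the enlarged, partly pseudo-labeled corpus
    all_docs = list(labeled_docs) + new_docs
    all_labels = np.concatenate([np.asarray(labels), pseudo[confident]])
    final_vec = TfidfVectorizer().fit(all_docs)
    final_clf = LogisticRegression(max_iter=1000).fit(final_vec.transform(all_docs), all_labels)
    return final_vec, final_clf
```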

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It is a method for finding a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention from the machine learning and artificial intelligence fields because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, all DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensemble studies have not shown performance as remarkable as that of DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when the multiple classifiers of the ensemble are highly correlated with one another, resulting in a multicollinearity problem that leads to performance degradation of the ensemble. They have also proposed differentiated learning strategies to cope with this performance degradation problem. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable performance improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, and thus small changes in the training data can yield large changes in the generated classifiers. Therefore, an ensemble of unstable learning algorithms can guarantee some diversity among the classifiers. To the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, and thus the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) showed a performance comparison in bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT. Meanwhile, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factor (VIF) analysis empirically proves that the performance degradation of the ensemble is due to a multicollinearity problem. It also proposes that optimization of the ensemble is needed to cope with such a problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of classifiers in the coverage optimization process. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for the coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier.
The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver. Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that takes the correlations within the ensemble into account. The classifiers with a potential multicollinearity problem are removed by the coverage optimization process of CO-NN, and thereby CO-NN has shown higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in future research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
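The chromosome encoding and fitness described above can be sketched as follows. The paper itself used Microsoft Excel with the Evolver GA package; this numpy version is only illustrative, and the VIF limit, majority-vote combination, and GA operators are assumptions for the sketch:

```python
# Numpy sketch of the chromosome and fitness described above; the paper itself used
# Microsoft Excel with the Evolver GA package. The VIF limit, majority-vote
# combination, and GA operators here are assumptions for illustration only.
# `preds` is an (n_samples, n_classifiers) 0/1 matrix of individual NN predictions.
import numpy as np

def max_vif(pred_subset):
    """Largest variance inflation factor among the selected classifiers' outputs."""
    if pred_subset.shape[1] < 2:
        return 1.0
    corr = np.corrcoef(pred_subset, rowvar=False)
    try:
        # The diagonal of the inverse correlation matrix gives the VIFs directly
        return float(np.diag(np.linalg.inv(corr)).max())
    except np.linalg.LinAlgError:
        return np.inf                                  # perfectly collinear classifiers

def fitness(chrom, preds, y, vif_limit=10.0):
    """Ensemble accuracy, with non-diverse sub-ensembles rejected by the VIF constraint."""
    if chrom.sum() == 0:
        return -np.inf
    subset = preds[:, chrom.astype(bool)]
    if max_vif(subset) > vif_limit:                    # VIF constraint keeps diversity
        return -np.inf
    vote = (subset.mean(axis=1) >= 0.5).astype(int)    # majority vote, binary case
    return (vote == y).mean()                          # accuracy as the error-reduction proxy

def ga_select(preds, y, pop_size=50, n_gen=200, p_mut=0.02, rng=None):
    """Evolve a 0/1 mask over classifiers; each bit switches one NN in or out."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = preds.shape[1]
    pop = rng.integers(0, 2, (pop_size, n))
    for _ in range(n_gen):
        scores = np.array([fitness(c, preds, y) for c in pop])
        parents = pop[np.argsort(-scores)[: pop_size // 2]]            # truncation selection
        idx = rng.integers(0, len(parents), (pop_size - len(parents), 2))
        mix = rng.random((pop_size - len(parents), n)) < 0.5           # uniform crossover
        children = np.where(mix, parents[idx[:, 0]], parents[idx[:, 1]])
        flip = rng.random(children.shape) < p_mut                      # bit-flip mutation
        children[flip] ^= 1
        pop = np.vstack([parents, children])
    return max(pop, key=lambda c: fitness(c, preds, y)).astype(bool)
```

Rejecting over-correlated sub-ensembles inside the fitness function is one simple way to express the VIF constraint; a penalty term on the fitness would be an equally reasonable alternative.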

A Basic Study on the Characteristics of the Modern Garden in Incheon During the Opening Period - Focused on Rikidake's Villa - (개항기 인천 근대정원의 조영특성에 관한 기초연구 - 리키다케 별장을 중심으로 -)

  • Jin, Hye-Young;Shin, Hyun-Sil
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.38 no.3
    • /
    • pp.83-91
    • /
    • 2020
  • The purpose of this study is to examine the process of formation of modern gardens, based on an analysis of the formation and transformation of Jemulpo in Incheon and the details of modern garden construction. The results are as follows. First, the formation of the Incheon settlement site began in 1876 with the signing of the Joseon-Japan Treaty. Jemulpo used to be a desolate fishing village, but after its opening in 1881, the Japanese settlement, the Chinese settlement, and the general foreign settlement were formed. After that, Japan reclaimed the southern mudflats, expanded its settlement area, and advanced into the Joseon area (currently Sinheung-dong). In the Japanese colonial era, modern Japanese urban landscapes were transplanted into the settlement area, and Japanese modern gardens were distributed around the center of the settlement area. Second, examining the process of creating the garden of the Rikidake villa, Rikidake, a Japanese who was a major landowner in Incheon, purchased a site used as an orchard in Uri-tang to create the garden. At the time of Rikidake's residence, the garden was very large, measuring about 3,000 pyeong, and after liberation it was acquired by Incheon City and used as the Yulmok Children's Library. The area was known as a rich village at the time of the opening of the port, and the garden was located at the highest point in Yulmok-dong, making it easy to see the Incheon Port area. Also, the fact that the spot was located about 300 meters away from Rikidake's rice mill may have affected the selection of the location. Third, today's Rikidake villa has a Japanese-style house on a trapezoidal site, with a garden of about 990 square meters on the south side. Currently, it is possible to enter from the south and from Yulmok Children's Park in the north, but in the past the main orientation of the house was toward the Incheon Port, the settlement area, and the Rikidake rice mill, so the house was located in front of the garden. The garden is a multi-faceted style with stone lanterns, tombstones, garden stones, and trees placed on each side, and is surrounded by arboreal plants such as yew, strobus pine, and maple trees, as well as royal azaleas. The view from inside the house was secured through shrub-oriented vegetation around the house.

The Process of Hillslope Denudation Since the Last Glacial Maximum Near Tangjeong-myeon, Asan-si, Central Korea (아산시(牙山市) 탕정면(湯井面) 일대(一帶) 최종빙기(最終氷期) 최성기(最盛期) 이후(以後) 구사면(丘斜面)의 삭박과정(削剝過程))

  • PARK, Ji-Hoon;JANG, Dong-Ho
    • Journal of The Geomorphological Association of Korea
    • /
    • v.15 no.2
    • /
    • pp.67-83
    • /
    • 2008
  • To find out the process of hillslope denudation since the Last Glacial Maximum in the Asan area, we conducted stratigraphic interpretation and radiocarbon age measurements on samples collected through trenching in the valley bottom of 'Agol', located in the lower reach of the Magok stream. The results are as follows. 11 inorganic and 8 organic matter layers were confirmed at trench point MG1 in the study area, 7 inorganic and 3 organic layers at trench point MG2, and 5 inorganic and 3 organic layers at trench point MG3. The frequency of hillslope denudation, i.e. hillslope mass movement, which occurred in the unstable environment of the back hillslope at point MG1, was 11 events in total (8 before about 2,900 yr BP, twice between about 2,900 and 1,900 yr BP, and once after about 1,900 yr BP). The frequency of moor formation, which occurred in the comparatively stable environment of the back hillslope, was at least 9 events (5 before about 3,000 yr BP, twice between 3,000 and 2,800 yr BP, and once between 2,200 and 1,900 yr BP). The frequency of back hillslope denudation at point MG2 was 7 events in total (4 before about 1,900 yr BP and 3 after about 1,900 yr BP), and moor formation occurred 3 times (twice before about 1,900 yr BP and once after 1,900 yr BP). The frequency of back hillslope denudation at point MG3 was 5 events in total (3 before about 1,900 yr BP and twice after about 1,900 yr BP), and moor formation occurred 3 times (twice before about 1,900 yr BP and once after 1,900 yr BP). The hillslope surrounding the valley bottom of 'Agol' was confirmed to have supplied the various inorganic materials, such as sand or sandy gravel, piled up in the valley bottom of the study area by mass movement, deposited not once but through several episodes of denudation. We found that the hillslope denudation cycle converged to a time period of 10² to 10³ years. These results will be important basic data for reconstructing the hillslope denudation process near Asan and the changing climate of the Late Quaternary Period.

A Study on the Concept of Records-Archives and on the Definition of Archival Terms (기록물의 개념과 용어의 정의에 관한 연구)

  • Kim, Jung-Ha
    • The Korean Journal of Archival Studies
    • /
    • no.21
    • /
    • pp.3-40
    • /
    • 2009
  • Ten years have passed since modern records and archives management was launched in Korea. During that time, it has developed dramatically in the fields of law, institutions, and education. However, the definition of records and archives has not been studied enough compared to the development of other research fields. In fact, the reason such study of the definition has not been accomplished is that historical, informational, and archival perspectives have been coexisting without order in Korea. This situation is the biggest barrier to archival science becoming a disciplinary field. Historically, the Latin word 'archivium' developed from meaning a place, to meaning the whole body of documents and their organic relations. In this respect, archives are rigidly separated from the materials of historical science, which cover everything that has been recorded. Unlike information, which is produced around intended themes and results in outputs like books, documents in archival science are made in the natural course of work. In addition, historical archives are those that have finished the current and semi-current stages and been transferred to an institution of permanent preservation after a process of selection, so that their historical and cultural value satisfies the purpose for which they were created. This change in perspective is rooted in the Second World War and the need of North American society for efficiency and transparency of work. In Korea, records and archives management has been dominated by the North American influence and has become a matter not of arrangement but of classification, not of transfer but of collection. It is also recognized as the management of information on everything recorded, or on documents, rather than as whole bodies of documents with all their organic relations. But if records management is recognized only as technology, it cannot have dignity as a field of science.

Awareness of Pre-Service Elementary Teachers on Science Teaching-Learning Lesson Plans (초등예비교사의 과학과 교수·학습 과정안 작성에 대한 인식)

  • Lee, Yong-Seob;Kim, Sun-Sik
    • Journal of the Korean Society of Earth Science Education
    • /
    • v.15 no.3
    • /
    • pp.335-344
    • /
    • 2022
  • This study was conducted over 4 weeks on the preparation of science teaching and learning course plans by 109 students in 4 classes of the 2nd-year intensive course at B University of Education. The pre-service elementary teachers attended a two-week field training practicum after listening to a lecture on how to write a science teaching and learning course plan. The study sought to find out how pre-service elementary teachers select materials and how strongly they connect the course plan to the class when preparing a science teaching and learning course plan. The researcher completed the questionnaire by reviewing and deliberating on the questionnaire items together with 4 pre-service elementary teachers. The questionnaire related to writing the science teaching and learning course plan consists of 8 questions, covering the preferred reference materials when writing the course plan, the level of interest in learning, the relationship between the science course plan and the success or failure of the class, the preferred science instructional model, the evaluation method within a unit lesson, one's own efforts in writing the science teaching and learning course plan, and how helpful writing the course plan is to the class. The results of this study are as follows. First, it was found that pre-service elementary teachers preferred teacher guides the most when drafting science teaching and learning curriculum plans. Second, they recognize that the development stage is very important among the teaching and learning stages of a science lesson. Third, pre-service elementary teachers believe that the science teaching and learning process plan is highly correlated with the success of the class. Fourth, they said that the student's level, the teacher's ability, and an appropriate lesson plan had the most influence on the class. Fifth, it was found that pre-service elementary teachers prefer the inquiry learning class model. Sixth, it was found that reports and activity sheets were preferred for evaluation in 40-minute classes. Seventh, they stated that because the teaching and learning process plan is highly related to the class, they will study it diligently. Eighth, the method of writing a science teaching and learning course plan based on instructional design principles was regarded as very beneficial.