Title/Summary/Keyword: Existing Model

Development of the Career Education Teaching Materials for the 'Information and Communication Technology and Our Life' Unit ('정보 통신 기술과 생활' 단원에서 진로교육 수업자료 개발)

  • Choi, Ji-Na; Lee, Yong-Jin
    • 대한공업교육학회지, v.37 no.1, pp.145-164, 2012
  • The purpose of this study is to develop career education teaching materials for the 'Information and Communication Technology and Our Life' unit in technology education. In the preparation phase, in order to choose content suitable for career education, we analyzed the technology education curriculum and the 'Information and Communication Technology and Our Life' unit of the technology and home economics subject, and then compared and analyzed the existing related research. After a content analysis of the teaching materials for career education, we mapped the contents onto career education areas. In the 'Design' step of teaching, we extracted the unit design components after analyzing the 'Development in Information and Communication Technology' unit of eleven textbooks used in the 2007 revised curriculum. In the 'Introduction', 'Activity', and 'Arrangement' steps of teaching, we developed the teaching materials by applying the SHIP model, a career education program model. We then obtained expert evaluations using a questionnaire and improved the suitability of the teaching materials. The results are as follows. First, our teaching materials reflect the development history of information and communication technology well, show the features of career education, and are suitable for middle school students. Second, they can help students encounter the various jobs related to the development of information and communication technology and develop more interest in, and more opportunities to explore, the 'Information and Communication Technology' subject. Third, they can be used by teachers for career education during class time in the 'Information and Communication Technology and Our Life' unit of the 2007 revised curriculum, as well as in extracurricular activities related to career education and in Creative Experience Activities. Furthermore, since the 2009 revised curriculum includes a career education unit in the 'Information and Communication Technology' subject, our teaching materials can also be partially reused as teaching materials in the future.

Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory (조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로)

  • Jeong, Seung-Ryul; Bae, Uk-Ho
    • Asia Pacific Journal of Information Systems, v.22 no.2, pp.21-38, 2012
  • Until recently, the successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognizes the need to identify the disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages", software programs developed by independent software vendors for sale to the organizations that use them, they are designed to meet the general needs of numerous organizations rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the implementation risks commonly associated with custom-developed software. However, the nature of the package itself can also be a risk factor, as the features and functions of an ERP system may not fully match a particular organization's information requirements. In this study, based on the organizational memory mismatch perspective derived from organizational memory theory and cognitive dissonance theory, we define the nature of these disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is a primary determinant of successful ERP implementation. Furthermore, we suggest that customization efforts, as a coping strategy for mismatches, can play a significant role in increasing the chances of success. To examine these contentions, we conducted a survey-based field study of ERP project team members, obtaining a total of 77 responses. The results show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems has a significant negative impact on ERP implementation success. This confirms our hypothesis that the greater the mismatch, the more difficult successful ERP implementation becomes, and it draws attention to mismatch as a major source of failure in ERP implementation. The study also found that, as a coping strategy for mismatch, the effects of customization are significant; in other words, applying the appropriate customization method can lead to successful ERP implementation. This is somewhat interesting because it runs counter to the argument, made in some of the literature and by ERP vendors, that minimal customization (or even none at all) is required for successful ERP implementation. In many ERP projects there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." This study asserts, however, that successful implementation cannot be expected if ERP systems are not customized where mismatches exist. For a more detailed analysis, we identified three types of mismatch: Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (situations in which the ERP system cannot support existing information needs that are currently being fulfilled) were found to directly influence ERP implementation; neither Non-Procedure nor Hybrid mismatches had a significant impact in the ERP context.
These findings provide meaningful insights, since they can serve as a basis for discussing how the ERP implementation process should be defined and what activities it should include. They suggest that ERP developers may not want to include organizational (or business process) changes in the implementation process, as doing so could lead to failed implementation; indeed, this suggestion was borne out by our finding that applying process customization led to a higher likelihood of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch that needs attention during the implementation process, implying that organizational changes must be made before, rather than during, implementation. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly, this finding, too, departs from the thinking of vendors in the ERP industry. Vendors recommend applying as many best practices as possible, thereby minimizing customization and the use of bolt-on development methods; they particularly advise against changing the source code and instead recommend, when necessary, programming additional software code in the vendor's own language. As stated above, however, our study found active customization, and bolt-on development methods especially, to have positive effects on ERP, with source code changes in particular having the most significant effects, while programming additional software proved ineffective. This suggests that ERP developers and vendors differ considerably in their viewpoints on, and strategies toward, ERP customization. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success. Considering their significance, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.

The Influence of Online Social Networking on Individual Virtual Competence and Task Performance in Organizations (온라인 네트워킹 활동이 가상협업 역량 및 업무성과에 미치는 영향)

  • Suh, A-Young; Shin, Kyung-Shik
    • Asia Pacific Journal of Information Systems, v.22 no.2, pp.39-69, 2012
  • With the advent of communication technologies, including electronic collaborative tools and conferencing systems provided over the Internet, virtual collaboration is becoming increasingly common in organizations. Virtual collaboration refers to an environment in which the people working together are interdependent in their tasks, share responsibility for outcomes, are geographically dispersed, and rely on mediated rather than face-to-face communication to produce an outcome. Research suggests that new sets of individual skills, knowledge, and abilities (SKAs) are required to perform effectively in today's virtualized workplace; these are collectively labeled individual virtual competence. It is also argued that the use of online social networking sites may influence not only individuals' daily lives but also their capability to manage work-related relationships in organizations, which in turn leads to better performance. Existing research on (1) the relationship between virtual competence and task performance and (2) the relationship between online networking and task performance has been conducted from different theoretical perspectives, so little is known about how online social networking and virtual competence interact to predict individuals' task performance. To fill this gap, this study raises the following research questions: (1) What individual virtual competence is required for better adjustment to the virtual collaboration environment? (2) How does online networking via diverse social network service sites influence individuals' task performance in organizations? (3) How do the joint effects of individual virtual competence and online networking influence task performance? To address these questions, we first draw on the prior literature and derive four dimensions of individual virtual competence related to an individual's self-concept, knowledge, and ability. Computer self-efficacy is defined as the extent to which an individual believes in his or her ability to use computer technology broadly. Remote-work self-efficacy is defined as the extent to which an individual believes in his or her ability to work and perform joint tasks with others in virtual settings. Virtual media skill is defined as the degree of confidence individuals have in their ability to function in their work role without face-to-face interaction. Virtual social skill is an individual's skill level in using technologies to communicate in virtual settings to their full potential; it should be noted that virtual social skill differs from self-efficacy in that it captures an individual's cognition-based ability to build social relationships with others in virtual settings. Next, we discuss how online networking influences both individual virtual competence and task performance, based on social network theory and social learning theory. We argue that online networking may enhance individuals' capability to expand their social networks at low cost, and that it may enable individuals to learn the necessary skills for using technological functions, communicating with others, sharing information, and building social relations through the functions provided by electronic media, consequently increasing individual virtual competence.
To examine the relationships among online networking, virtual competence, and task performance, we developed research models (the mediation, interaction, and additive models) by integrating social network theory and social learning theory. Using data from 112 employees of a virtualized company, we tested the proposed models. The results partly support the mediation model: online social networking positively influences individuals' computer self-efficacy, virtual social skill, and virtual media skill, which are key predictors of task performance. The results also partly support the interaction model: the level of remote-work self-efficacy moderates the relationship between online social networking and task performance. Together, the results paint a picture of people adjusting to virtual collaboration in ways that both constrain and enable their task performance. This study contributes to research and practice. First, we suggest shifting the research focus to the individual level when examining virtual phenomena and theorize that online social networking can enhance some aspects of individual virtual competence. Second, we replicate and advance the prior competence literature by linking each component of virtual competence to objective task performance. The results provide useful insights into how human resource managers can assess employees' weaknesses and strengths when organizing virtualized groups or projects, and into the kinds of development or training programs managers can run with their employees to advance their ability to undertake virtual work.
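
For readers unfamiliar with how such an interaction (moderation) model is tested, a minimal sketch follows. It uses simulated data, not the study's 112-employee sample, and the variable names are illustrative assumptions; the coefficient on the product term carries the moderation test.

```python
# Sketch of the interaction (moderation) model: task performance regressed on
# online networking, remote-work self-efficacy, and their product term.
# The data below are simulated, not the study's 112-employee sample.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 112
df = pd.DataFrame({"networking": rng.normal(0, 1, n),
                   "remote_se": rng.normal(0, 1, n)})
df["performance"] = (0.3 * df["networking"] + 0.4 * df["remote_se"]
                     + 0.2 * df["networking"] * df["remote_se"]
                     + rng.normal(0, 1, n))

# "a * b" expands to a + b + a:b; a significant a:b coefficient indicates
# that remote-work self-efficacy moderates the networking-performance link.
fit = smf.ols("performance ~ networking * remote_se", data=df).fit()
print(fit.summary().tables[1])
```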

A Study of the Environmental Consciousness Influences on the Psychological Reaction of Forest Ecotourists (환경의식에 따른 산림생태관광객의 심리적 반응에 관한 연구)

  • Yan, Guang-Hao; Na, Seung-Hwa
    • Journal of Distribution Science, v.10 no.1, pp.43-52, 2012
  • With growing attention to environmental issues and changing environmental consciousness, ecotourism is being discussed in various social fields. Ecotourism has been popularized for the sake of environmental protection and is now growing from an alternative to mass tourism into a mainstream product. Ecotourism emphasizes sustainable development of the destination's society, economy, and environment and, through study and education, enables people to understand the core value of the ecological environment. The UN designated 2011 the International Year of Forests. In recent years, forests have become increasingly important for their environmental, economic, social, and cultural values and functions. In particular, the global environmental issues caused by climate change have become an international agenda, and forests are regarded as the only effective solution for the carbon dioxide that causes global warming. Moreover, forests constitute a major part of ecotourism and are the resource most used by ecotourists; Korea, where about 60% of the land is forest, is a case in point. With increasing interest in the environment, the number of tourists visiting ecosystem forests, which are highly valued for conservation, is rising significantly every year and receiving considerable attention from the government. However, poor facilities at forest ecotourism sites and inappropriate marketing strategies have left many of these sites poorly run. Furthermore, tourists' environmental awareness affects both pollution of the ecological environment and the optimization of forest ecotourism. To verify the relationships among environmental consciousness, tourist attractiveness, the charm of the attractions, and post-tour attitudes, we constructed scales based on existing research and administered a questionnaire survey. Conducting surveys over 12 weeks, from December 20, 2010 to February 20, 2011, we obtained 582 valid questionnaires out of a total of 700 for statistical analysis. For analysis, we first applied Cronbach's alpha to verify reliability and then exploratory factor analysis to verify validity. Second, frequency analysis was used for the demographics, and AMOS was used to compute the measurement model and the structural equation model and to examine construct, convergent, discriminant, and nomological validity. Third, path analysis of the structural equation model in AMOS 19 was used to analyze the effects of ecotourists' environmental consciousness on tourist attractiveness, the charm of the attractions, and post-tour attitudes. We found that high awareness of nature protection leads to higher tourist motivation and satisfaction and a more positive attitude after the tour; environmental consciousness, however, was not found to have a significant effect on tourist attractiveness. The research thus shows the psychological and behavioral reactions of ecotourists to ecotourism development. Practitioners should attend to changes in nature-protection consciousness and to ecotourists' psychological reactions while ensuring the sustainable development of ecotourism and designing ecotourism programs.
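
As a pointer to the first analysis step mentioned above, here is a minimal computation of Cronbach's alpha; the item scores are simulated, not the survey's 582 responses.

```python
# Cronbach's alpha, the reliability check applied first in the analysis:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The scores below are simulated, not the survey's data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(30, 5))  # 30 respondents, five 5-point items
print(round(cronbach_alpha(scores), 3))
```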

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee; Jang, Heewon; Kim, Wooju
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.253-266, 2018
  • This paper proposes a sequence tagging methodology to improve the performance of NER (Named Entity Recognition) as used in a QA system. To retrieve the correct answers stored in a database, the user's query must be translated into a database language such as SQL (Structured Query Language) so that the computer can process it; this requires identifying the classes or data names in the database to which the query refers. The existing method, which simply looks the query's words up in the database, cannot disambiguate homonyms or recognize multi-word phrases because it does not consider the context of the user's query. When there are multiple search results, all of them are returned, so the query admits many interpretations and the time complexity of the computation becomes large. To overcome this, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address a weakness of neural network models, their inability to identify untrained words, by using an ontology-knowledge-based feature. Experiments were conducted on an ontology knowledge base of the music domain and the performance was evaluated. To evaluate the proposed Bidirectional LSTM-CRF accurately, we converted words included in the training queries into untrained words, testing whether words that were contained in the database but unseen in training were still correctly identified. As a result, the model could recognize entities in context and recognize untrained words without retraining, and it was confirmed that the overall entity recognition performance improves.
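
A minimal sketch of a BiLSTM-CRF tagger of the kind described above, written in PyTorch with the third-party pytorch-crf package; the class name, dimensions, and the encoding of the ontology feature as a per-token type embedding are illustrative assumptions, not the authors' implementation.

```python
# Illustrative BiLSTM-CRF tagger with a per-token ontology feature.
# Assumes PyTorch and the third-party pytorch-crf package (pip install pytorch-crf).
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRFTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, num_onto_types,
                 word_dim=100, onto_dim=20, hidden_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        # Ontology-based feature: each token carries the id of the ontology
        # class it matches (e.g. Artist, Album), or 0 when nothing matches,
        # so entities unseen in training still get a useful signal.
        self.onto_emb = nn.Embedding(num_onto_types, onto_dim, padding_idx=0)
        self.lstm = nn.LSTM(word_dim + onto_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.emit = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, words, onto_ids, tags=None, mask=None):
        x = torch.cat([self.word_emb(words), self.onto_emb(onto_ids)], dim=-1)
        emissions = self.emit(self.lstm(x)[0])
        if tags is not None:                      # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)  # inference: best tag paths
```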

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun; Hyun, Yoonjin; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.21-44, 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization. The problem has drawn the interest of many researchers and created demand for professionals capable of classifying relevant information; hence text classification was introduced. Text classification is a challenging task in modern data analysis: assigning a text document to one or more predefined categories or classes. Many techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. When dealing with huge amounts of text data, however, model performance and accuracy become a challenge, and performance varies with the vocabulary of the corpus and the features created for classification. Most prior attempts have proposed a new algorithm or modified an existing one, a line of research that has arguably reached its limits. In this study, instead of proposing or modifying an algorithm, we focus on modifying how the data are used. It is widely known that classifier performance depends on the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, can carry noise-like characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. For unstructured data such as text, however, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and target data differ, the features may differ between the two. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data coming from various sources are likely formatted differently, traditional machine learning algorithms have difficulty recognizing different data representations at once and unifying them under the same generalization. To exploit heterogeneous data when training the document classifier, we therefore apply semi-supervised learning. Unlabeled data, however, may degrade the performance of the document classifier.
To address this, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision. In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
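
The following is not the authors' RSESLA but a minimal self-training sketch in scikit-learn showing the general mechanism it builds on: a classifier trained on labeled in-domain documents absorbs heterogeneous unlabeled documents only when its own predictions are sufficiently confident. All documents, labels, and the threshold are illustrative.

```python
# Minimal self-training sketch (not RSESLA itself): unlabeled heterogeneous
# documents, marked with -1, are pseudo-labeled only when the classifier's
# prediction confidence clears a threshold. Illustrative data only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

labeled_docs = ["stock prices fell sharply today",
                "the team won the final match",
                "central bank raises interest rates",
                "striker scores twice in derby"]
labels = [0, 1, 0, 1]                      # 0 = economy, 1 = sports
# Heterogeneous, unlabeled documents (tweets, blog posts); -1 marks "unlabeled".
unlabeled_docs = ["great win for the home side tonight!",
                  "markets rally on surprise rate cut"]

X = labeled_docs + unlabeled_docs
y = np.array(labels + [-1] * len(unlabeled_docs))

model = make_pipeline(
    TfidfVectorizer(),
    SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.6),
)
model.fit(X, y)
print(model.predict(["late goal decides the championship game"]))
```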

Development of Multimedia Annotation and Retrieval System using MPEG-7 based Semantic Metadata Model (MPEG-7 기반 의미적 메타데이터 모델을 이용한 멀티미디어 주석 및 검색 시스템의 개발)

  • An, Hyoung-Geun; Koh, Jae-Jin
    • The KIPS Transactions: Part D, v.14D no.6, pp.573-584, 2007
  • As multimedia information grows rapidly, retrieval of the various types of multimedia data has become an issue of great importance. Efficient multimedia data processing requires semantics-based retrieval techniques that can extract the meaning of multimedia content. Existing retrieval methods for multimedia data are annotation-based retrieval, feature-based retrieval, and retrieval integrating annotations and features. These systems demand a great deal of effort and time from the annotator, require complicated computation for feature extraction, suffer from static search over data that do not change once created, and do not support user-friendly, semantic search techniques. This paper develops S-MARS (Semantic Metadata-based Multimedia Annotation and Retrieval System), which can represent and retrieve multimedia data efficiently using MPEG-7. The system provides a graphical user interface for annotating, searching, and browsing multimedia data, and it is implemented on the basis of a semantic metadata model for representing multimedia information. The semantic metadata about multimedia data are organized according to a multimedia description schema, defined with XML Schema, that complies with the MPEG-7 standard. The proposed scheme can therefore be implemented easily on any multimedia platform supporting XML technology; it can be used to enable efficient sharing of semantic metadata between systems, and it will contribute to improving retrieval correctness and user satisfaction.
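
For orientation, a small Python sketch of building a simplified MPEG-7-style semantic annotation with the standard library; the element names follow the general shape of MPEG-7 descriptions but are simplified and not validated against the actual schema, and are not taken from the paper.

```python
# Illustrative only: a simplified MPEG-7-style semantic annotation built with
# the Python standard library. Element names follow the general shape of
# MPEG-7 descriptions but are not validated against the actual schema.
import xml.etree.ElementTree as ET

NS = "urn:mpeg:mpeg7:schema:2001"
ET.register_namespace("", NS)

mpeg7 = ET.Element(f"{{{NS}}}Mpeg7")
description = ET.SubElement(mpeg7, f"{{{NS}}}Description")
video = ET.SubElement(description, f"{{{NS}}}Video", id="clip-001")
semantic = ET.SubElement(video, f"{{{NS}}}Semantic")
ET.SubElement(semantic, f"{{{NS}}}Label").text = "goal scene"
ET.SubElement(semantic, f"{{{NS}}}Definition").text = "Player scores in the second half."

print(ET.tostring(mpeg7, encoding="unicode"))
```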

The Comparison of Existing Synthetic Unit Hydrograph Method in Korea (국내 기존 합성단위도 방법의 비교)

  • Jeong, Seong-Won; Mun, Jang-Won
    • Journal of Korea Water Resources Association, v.34 no.6, pp.659-672, 2001
  • Generally, the design flood for a hydraulic structure is estimated through statistical analysis of runoff data. When runoff data are lacking, however, statistical methods are difficult to apply, and a synthetic unit hydrograph method is generally used instead; models such as the NYMO, Snyder, SCS, and HYMO methods have been widely used in Korea. In this study, these methods and the KICT method, developed in 2000, are compared and analyzed over 10 study areas. First, the peak flow and peak time of the representative unit hydrograph and of each synthetic unit hydrograph are compared; second, the shapes of the unit hydrographs are compared using the root mean square error (RMSE). The synthetic unit hydrograph of the Nakayasu method, developed in Japan, differs greatly from the representative unit hydrograph in peak flow, peak time, and shape, whereas the KICT method (2000) is superior to the others, including in its use of hydrologic and topographical data. The Nakayasu method is therefore not appropriate for hydrological practice in Korea, and the KICT model is considered the better method for estimating design floods. If another model, such as the SCS, Nakayasu, or HYMO method, is used, its parameters or regression equations must be adjusted through analysis of observed data in Korea.
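
The shape comparison described above reduces to a root mean square error between the ordinates of the representative and synthetic unit hydrographs; a minimal sketch with illustrative values, not the study's data:

```python
# RMSE between the ordinates of a representative unit hydrograph and a
# synthetic one, as used for the shape comparison. Values are illustrative.
import numpy as np

uh_representative = np.array([0, 5, 18, 30, 24, 16, 10, 6, 4, 2, 1, 0], float)
uh_synthetic      = np.array([0, 4, 15, 28, 26, 18, 11, 7, 4, 2, 1, 0], float)

rmse = np.sqrt(np.mean((uh_representative - uh_synthetic) ** 2))
print(f"RMSE = {rmse:.3f}")
```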

A Development and Application of the Landscape Evaluation Model Based on the Biotope Classification (비오톱 유형분류를 기반으로 한 경관평가 모형개발 및 적용)

  • Park, Cheon-Jin; Ra, Jung-Hwa; Cho, Hyun-Ju; Kim, Jin-Hyo; Kwon, Oh-Sung
    • Journal of the Korean Institute of Landscape Architecture, v.40 no.4, pp.114-126, 2012
  • The purpose of this study is to find a way to evaluate landscape on the basis of biotope classification before development, taking as its subject an area of about 10.0 km² around Nongong-eup and Okpo-myeon, Dalseong-gun, Daegu, where a large project has been planned. The results of this study are as follows. The biotope classification yielded 23 type groups, subdivided into 140 types. Three surveys were performed to select the assessment indicators. The first survey analyzed the importance of 22 assessment indicators selected from a review of the existing literature and from field research. The second survey performed a factor analysis and reclassified the value indicators. The third survey computed the additive weights of the selected indicators by standardizing their average importance so that the weights sum to 10; these weights were then multiplied by each indicator's grade to produce the final evaluation. Nineteen assessment indicators, including vitality elements and visual obstruction elements, were finally selected through the expert survey. In the first evaluation, 7 spaces were rated grade 1 (especially valuable), 4 spaces grade 2, 5 spaces grade 3, 2 spaces grade 4, and 5 spaces grade 5. In the second evaluation, 15 spaces were rated especially valuable (grades 1a and 1b) and 28 spaces were rated valuable (grade 2). Since the evaluation model of this study was applied to only one site, its applicability to other similar areas may be limited; synthesis and standardization across various examples will be needed to attain greater objectivity.
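
The weighting scheme described above, rescaling mean importance so the weights sum to 10 and multiplying by the indicator grades, can be sketched as follows; the indicator names and scores are illustrative assumptions, not the study's 19 indicators.

```python
# Sketch of the weighting scheme: mean importance scores are rescaled so the
# weights sum to 10, then multiplied by each indicator's grade for the final
# evaluation. Indicator names, scores, and grades are illustrative.
mean_importance = {"vitality": 4.2, "visual_obstruction": 3.6, "naturalness": 4.8}

total = sum(mean_importance.values())
weights = {k: 10 * v / total for k, v in mean_importance.items()}  # sums to 10

grades = {"vitality": 3, "visual_obstruction": 2, "naturalness": 5}  # per space
score = sum(weights[k] * grades[k] for k in weights)
print({k: round(w, 2) for k, w in weights.items()}, round(score, 2))
```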

Increasing Accuracy of Stock Price Pattern Prediction through Data Augmentation for Deep Learning (데이터 증강을 통한 딥러닝 기반 주가 패턴 예측 정확도 향상 방안)

  • Kim, Youngjun; Kim, Yeojeong; Lee, Insun; Lee, Hong Joo
    • The Journal of Bigdata, v.4 no.2, pp.1-12, 2019
  • As Artificial Intelligence (AI) technology develops, it is being applied to fields such as image, voice, and text, and has shown fine results in certain areas. Researchers have also tried to predict the stock market with AI. Predicting the stock market is known to be a difficult problem, since the market is affected by many factors such as the economy and politics. In the field of AI, there have been attempts to predict the ups and downs of stock prices by studying price patterns with various machine learning techniques. This study suggests a way of predicting stock price patterns with a Convolutional Neural Network (CNN), which classifies images by extracting features through convolutional layers; accordingly, this study classifies candlestick-chart images generated from stock data in order to predict patterns. The study has two objectives. The first, referred to as Case 1, is to predict patterns from images made from the same day's stock price data. The second, referred to as Case 2, is to predict the next day's stock price pattern from images produced from the daily stock price data. In Case 1, data augmentation methods, random modification and Gaussian noise, are applied to generate more training data, and the generated images are fed to the model. Given that deep learning requires a large amount of data, this study suggests a data augmentation method for candlestick images, and compares the accuracies obtained with Gaussian noise and with different classification problems. All data in this study were collected through the OpenAPI provided by DaiShin Securities. Case 1 has five labels depending on the pattern: up with up closing, up with down closing, down with up closing, down with down closing, and staying. The images in Case 1 are created by removing the last candle (-1 candle), the last two candles (-2 candles), or the last three candles (-3 candles) from 60-minute, 30-minute, 10-minute, and 5-minute candle charts; in a 60-minute candle chart, each candle carries 60 minutes of information: the open, high, low, and close prices. Case 2 has two labels, up and down, and its images were generated from 60-minute, 30-minute, 10-minute, and 5-minute candle charts without removing any candle. Considering the nature of stock data, this study suggests moving the candles in the images instead of using existing data augmentation techniques; how far a candle is moved is defined as the modified value. The average difference between the closing prices of adjacent candles was 0.0029, so modified values of 0.003, 0.002, 0.001, and 0.00025 were used, and the number of images was doubled after augmentation. For the Gaussian noise, the mean was 0 and the variance 0.01. For both Case 1 and Case 2, the model is based on VGGNet-16, which has 16 layers. Among the 60-minute, 30-minute, 10-minute, and 5-minute candle charts, the 10-minute -1 candle setting showed the best accuracy, so 10-minute images were used for the rest of the Case 1 experiments. The images with three candles removed were selected for data augmentation and the application of Gaussian noise: 10-minute -3 candles yielded 79.72% accuracy, images with a modified value of 0.00025 and 100% of candles moved reached 79.92%, and applying Gaussian noise raised the accuracy to 80.98%.
In Case 2, 60-minute candle charts predicted the next day's pattern with 82.60% accuracy. In sum, this study is expected to contribute to further research on predicting stock price patterns from images, and it provides a possible method of data augmentation for stock data.
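
The two augmentations described above can be sketched as follows; the OHLC array layout and the toy price series are illustrative assumptions, not the authors' code.

```python
# Sketch of the two augmentations: moving whole candles by the "modified
# value", and adding zero-mean Gaussian noise (variance 0.01), applied to
# OHLC arrays before rendering candlestick images.
import numpy as np

def shift_candles(ohlc, modified_value=0.00025, frac=1.0, rng=None):
    """Move a fraction of candles up or down by +/- modified_value."""
    rng = rng or np.random.default_rng()
    out = ohlc.copy()
    idx = rng.random(len(out)) < frac
    out[idx] += rng.choice([-1.0, 1.0], size=int(idx.sum()))[:, None] * modified_value
    return out

def add_gaussian_noise(ohlc, var=0.01, rng=None):
    """Add zero-mean Gaussian noise with the given variance."""
    rng = rng or np.random.default_rng()
    return ohlc + rng.normal(0.0, np.sqrt(var), size=ohlc.shape)

rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(0, 0.002, 60)) + 1.0                      # toy prices
ohlc = np.stack([close, close + 0.001, close - 0.001, close], axis=1)  # O, H, L, C
augmented = add_gaussian_noise(shift_candles(ohlc, 0.003, rng=rng), rng=rng)
```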
