• Title/Summary/Keyword: Model Generalization

Search Results: 434

A Study on the Control System of Maximum Demand Power Using Neural Network and Fuzzy Logic (신경망과 퍼지논리를 이용한 최대수요전력 제어시스템에 관한연구)

  • 조성원
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.9 no.4
    • /
    • pp.420-425
    • /
    • 1999
  • The maximum demand controller is electrical equipment installed at the consumer side of a power system for monitoring the electrical energy consumed during every integrating period and preventing the target maximum demand (MD) from being exceeded by disconnecting sheddable loads. By avoiding peak loads and spreading the energy requirement, the controller contributes to maximizing the utility factor of the generator systems. This not only saves energy but also reduces the budget for constructing national base facilities by keeping the number of generating plants to a minimum. Conventional MD controllers often bring about a large number of control actions during every integrating period and/or undesirable load-disconnecting operations during the beginning stage of the integrating period. These drawbacks make users avoid MD controllers. In this paper, a fuzzy control technique is used to get around the disadvantages of the conventional MD control system. The proposed MD controller consists of a predictor module and a fuzzy MD control module. The proposed forecasting method uses the SOFM neural network model rather than time series analysis, and thus has the inherent advantages of neural networks such as parallel processing, generalization, and robustness. The MD fuzzy controller determines the sensitivity of control action based on how close the time is to the end of the integrating period and on the urgency of the load-interrupting action as the predicted demand approaches the target. The experimental results show that the proposed method has more accurate forecasting/control performance than previous methods.

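The abstract above describes an SOFM-based demand predictor but gives no implementation details. As a hedged illustration only, the following sketch shows how a one-dimensional self-organizing feature map could be trained on historical demand curves and used to forecast the remainder of an integrating period; all data, sizes, and the prefix-matching forecast rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 200 demand curves, each with 15 one-minute readings
# inside a 15-minute integrating period (kW, illustrative values).
curves = 100 + 20 * np.sin(np.linspace(0, np.pi, 15)) + rng.normal(0, 3, (200, 15))

# Train a 1-D SOFM whose codebook vectors become prototype demand curves.
n_units, epochs, lr0, sigma0 = 10, 30, 0.5, 3.0
weights = curves[rng.choice(len(curves), n_units)].copy()
for epoch in range(epochs):
    lr = lr0 * (1 - epoch / epochs)
    sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
    for x in curves[rng.permutation(len(curves))]:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        d = np.abs(np.arange(n_units) - bmu)
        h = np.exp(-d**2 / (2 * sigma**2))                    # neighborhood function
        weights += lr * h[:, None] * (x - weights)

def forecast(partial):
    """Match the observed prefix against the codebook and read off the rest."""
    k = len(partial)
    bmu = np.argmin(np.linalg.norm(weights[:, :k] - partial, axis=1))
    return weights[bmu, k:]

observed = curves[0, :6]           # first 6 minutes of the current period
predicted_tail = forecast(observed)
print(predicted_tail.shape)        # remaining 9 readings of the period
```

A controller could compare the predicted end-of-period total against the target MD and shed loads only when the forecast exceeds it, which is the role the paper assigns to the fuzzy module.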

Isolation and Identification of the Causal Agents of Red Pepper Wilting Symptoms (고추 시듦 증상을 일으키는 원인균의 분리 및 동정)

  • Lee, Kyeong Hee;Kim, Heung Tae
    • Research in Plant Disease
    • /
    • v.28 no.3
    • /
    • pp.143-151
    • /
    • 2022
  • In order to investigate the cause of wilting symptoms in red pepper fields in Korea, the frequency of red peppers showing wilting symptoms was investigated in pepper cultivation fields in Goesan, Chungcheongbuk-do for 5 years, from 2010 to 2014. The frequency of wilting symptoms differed by survey year, but it increased as the season progressed from June and July into August. During this period, Ralstonia solanacearum, which causes bacterial wilt, was isolated at a rate four times higher than Phytophthora capsici, which causes Phytophthora late blight. In wilted peppers collected in Goesan (Chungbuk) and Andong (Gyeongbuk) in 2013 and 2014, R. solanacearum and P. capsici were isolated from 20.3% and 3.8% of the total fields, respectively. In years with a high rate of wilting symptoms, the average temperature was high, and the onset date of bacterial wilt estimated with a disease forecasting model was also early. The inconsistency between the number of days at risk of Phytophthora late blight and the frequency of wilting symptoms is thought to be due to the generalization of the use of cultivars resistant to Phytophthora late blight in pepper fields. In our study, the wilting symptoms were caused by bacterial wilt due to R. solanacearum rather than Phytophthora late blight due to P. capsici, which is likely explained by the increasing cultivation of pepper varieties resistant to Phytophthora late blight in the field.

A Data-driven Classifier for Motion Detection of Soldiers on the Battlefield using Recurrent Architectures and Hyperparameter Optimization (순환 아키텍쳐 및 하이퍼파라미터 최적화를 이용한 데이터 기반 군사 동작 판별 알고리즘)

  • Joonho Kim;Geonju Chae;Jaemin Park;Kyeong-Won Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.107-119
    • /
    • 2023
  • Technology that recognizes a soldier's motion and movement status has recently attracted considerable attention as a combination of wearable technology and artificial intelligence, and it is expected to change the paradigm of troop management. The accuracy of state determination must be kept high to deliver the expected functions both in training (evaluation of, and feedback on, each individual's motions) and in combat (overall enhancement of troop management). However, when input data are given as a time series or sequence, existing feedforward networks show clear limitations in maximizing classification performance. Since the human behavior data handled for military motion recognition (3-axis accelerations and 3-axis angular velocities) require analysis of their time-dependent characteristics, this study proposes a high-performance data-driven classifier that utilizes long short-term memory (LSTM) to capture the order dependence of the acquired data, learning to classify eight representative military motions (Sitting, Standing, Walking, Running, Ascending, Descending, Low Crawl, and High Crawl). Since accuracy is highly dependent on a network's learning conditions and variables, manual adjustment is neither cost-effective nor guaranteed to yield optimal results. Therefore, in this study, we optimized hyperparameters using Bayesian optimization for maximal generalization performance. As a result, the final architecture reduced the error rate by 62.56% compared with an existing network with a similar number of learnable parameters, reaching a final accuracy of 98.39% across the military motions.
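For readers unfamiliar with the recurrence that lets an LSTM capture the order dependence mentioned above, here is a single LSTM cell step in plain NumPy. The weights are random, the shapes (6 input features for 3-axis acceleration plus 3-axis angular velocity, 8 motion classes) merely mirror the abstract, and the classification head is hypothetical; this is not the authors' trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates computed from input x and previous state h."""
    z = W @ x + U @ h + b                 # stacked pre-activations, shape (4*H,)
    H = h.shape[0]
    i = sigmoid(z[0:H])                   # input gate
    f = sigmoid(z[H:2*H])                 # forget gate
    o = sigmoid(z[2*H:3*H])               # output gate
    g = np.tanh(z[3*H:4*H])               # candidate cell state
    c_new = f * c + i * g                 # carry long-term memory forward
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Shapes mirror the paper's inputs: 6 sensor features per step, hidden size 8.
D, H, T = 6, 8, 20
W = rng.normal(0, 0.1, (4*H, D))
U = rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
for t in range(T):                        # unroll over a 20-step sensor window
    x_t = rng.normal(size=D)
    h, c = lstm_step(x_t, h, c, W, U, b)

logits = rng.normal(0, 0.1, (8, H)) @ h   # 8 motion classes (hypothetical head)
print(int(np.argmax(logits)))
```

Because `h` at each step depends on the entire prefix of the sequence, the final state summarizes the whole window, which is exactly what a feedforward network applied frame-by-frame cannot do.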

Research on Usability of Mobile Food Delivery Application: Focusing on Korean Application and Chinese Application (모바일 배달 애플리케이션 사용성 평가 연구: 한국(배달의민족)과 중국(어러머)을 중심으로)

  • Yang Tian;Eunkyung Kweon;Sangmi Chai
    • Information Systems Review
    • /
    • v.20 no.1
    • /
    • pp.1-16
    • /
    • 2018
  • The development and generalization of the Internet increased the popularity of food delivery service applications in Korea. The food delivery market based on online-to-offline service is growing rapidly. This study compares the usability of a Korean food delivery service application with that of a Chinese food delivery service application and suggests improvement points for Korean food delivery service applications. To conduct this study, we explore the status of various food delivery service applications and conduct interviews and surveys based on the honeycomb model developed by Peter Morville. This study obtained the following results. First, all restaurants participating in the Korean food delivery service must be able to accept orders through the application. Second, the shopping cart function must be able to hold orders from multiple restaurants simultaneously. Third, when users look for menu recommendations, their purchase history and shopping cart should appear on the first page, so that users perceive the improved usability these functions provide. Fourth, when the search window is fixed at the top of each page, users should be able to find the information they need. Fifth, the application must allow users to see the exact location of the delivery person and the estimated delivery time. Finally, the restaurants' addresses should be disclosed and fast delivery times should be confirmed to enhance users' trust in the application. This study contributes to academia and industry by offering useful insights into food delivery service applications and ways to improve them in Korea.

Fulfilling the Export Potential of Agricultural Production in the Context of Aggravating Global Food Crisis

  • Hassan Ali Al-Ababneh;Ainur Osmonova;Ilona Dumanska;Petro Matkovskyi;Andriy Kalynovskyy
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.7
    • /
    • pp.128-142
    • /
    • 2024
  • Creation and implementation of an export-oriented strategy is an urgent issue in the economic development of any country. In an export-oriented model of economic development, exports should be a means of promoting economic growth and a tool to strengthen existing and potential competitive advantages. Agricultural production is a key factor in exports and a source of foreign exchange earnings in many countries. However, the export potential of agricultural producers may be inefficiently fulfilled due to the heterogeneity of countries in terms of economic development, trade relations, and border policy. The aim of the research is to study the nature, main trends, and problematic aspects of fulfilling the export potential of agricultural production in the context of an aggravating food crisis. The study involved general scientific methods (induction and deduction, description, analysis, synthesis, generalization) and special methods (statistical method, economic analysis, descriptive statistics and interstate comparisons, graphical method). The need for countries around the world to ensure food security underscores the importance of the agricultural sector as a catalyst for economic development, a source of foreign exchange earnings, an investment direction, etc. The study of agricultural specialization led to the conclusion that wheat and sugar are the goods with the highest export potential. It is substantiated that the countries of South America, the OECD, North America, and Europe have the highest level of realization of the export potential of agricultural production, while African countries are import-dependent. In addition, the low export orientation of Africa and Asia, due to the peculiarities of their natural and climatic conditions, is established based on an assessment of export-import operations in the regional context. The internal and external export potential of each of the regions is analysed. Economic and mathematical simulation was applied to assess the impact of the most important factors on wheat export volumes, which allowed predicting wheat export volumes and making sound management decisions regarding the realization of the export potential of agricultural companies. Through correlation and regression analysis, an inverse correlation was established between export volume and wheat consumption per capita, and a direct correlation between export volume and the effective size and area of land used for wheat cultivation.
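The correlation/regression finding above (exports inversely related to per-capita consumption, directly related to cultivated area) can be reproduced in miniature with ordinary least squares on synthetic data; all numbers below are illustrative, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30  # hypothetical country-year observations

# Illustrative data built to match the abstract's signs: exports fall with
# per-capita consumption and rise with cultivated area (units arbitrary).
area = rng.uniform(1, 10, n)                 # land under wheat
consumption = rng.uniform(50, 150, n)        # wheat consumption per capita
exports = 5 * area - 0.2 * consumption + rng.normal(0, 1, n)

# Ordinary least squares: exports ~ b0 + b1*area + b2*consumption
X = np.column_stack([np.ones(n), area, consumption])
beta, *_ = np.linalg.lstsq(X, exports, rcond=None)
b0, b1, b2 = beta
print(round(b1, 2), round(b2, 2))  # expect b1 > 0 (direct), b2 < 0 (inverse)
```

Fitted coefficients with these signs are the kind of evidence the correlation and regression analysis in the study would produce, and the fitted model can then be used to predict export volumes for management decisions.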

The Impact of Service Level Management(SLM) Process Maturity on Information Systems Success in Total Outsourcing: An Analytical Case Study (토털 아웃소싱 환경 하에서 IT서비스 수준관리(Service Level Management) 프로세스 성숙도가 정보시스템 성공에 미치는 영향에 관한 분석적 사례연구)

  • Cho, Geun Su;An, Joon Mo;Min, Hyoung Jin
    • Asia pacific journal of information systems
    • /
    • v.23 no.2
    • /
    • pp.21-39
    • /
    • 2013
  • As the utilization of information technology and the turbulence of technological change increase in organizations, the adoption of IT outsourcing also grows as a way to manage IT resources more effectively and efficiently. Under this approach to IT management, the service level management (SLM) process becomes critical to deriving success from outsourcing from the viewpoint of end users in the organization. Even though much research on service level management and agreements has been done during the last decades, the performance of the SLM process has not been evaluated in terms of the final objectives of the management effort, or of success as seen by end users. This study explores the relationship between SLM maturity and IT outsourcing success from the users' point of view through an analytical case study of four client organizations under one IT outsourcing vendor, a member company of a major Korean conglomerate. To set up a model for the analysis, previous research on SLM process maturity and information systems success is reviewed. In particular, information systems success from the users' point of view is reviewed based on DeLone and McLean's model, which is currently argued and accepted as a comprehensively tested model of information systems success. The model proposed in this study argues that SLM process maturity influences information systems success, evaluated in terms of information quality, systems quality, service quality, and net benefits as proposed by DeLone and McLean. SLM process maturity is measured in the planning process, the implementation process, and the operation and evaluation process. Instruments for measuring the factors in the proposed constructs of information systems success and SLM process maturity were collected from previous research and evaluated for reliability and validity, using appropriate statistical methods and pilot tests, before the case study.
Four cases from four different companies under one vendor were used for the analysis. All of the cases had been contracted under SLAs (Service Level Agreements) and had implemented ITIL (IT Infrastructure Library), Six Sigma, and BSC (Balanced Scorecard) methods for the last several years, meaning that all the client organizations had made concerted efforts to acquire quality services from IT outsourcing from the organizational and users' points of view. To compare the differences among the four organizations in IT outsourcing success, t-tests and non-parametric analysis were applied to data collected from the organizations using survey instruments. The process maturities of the planning and implementation phases of SLM were found not to influence any dimension of information systems success from the users' point of view. Only SLM maturity in the operations and evaluation phase was found to influence systems quality. This result runs counter to arguments common in IT outsourcing practice, which usually emphasize the importance of the planning and implementation processes upfront in IT outsourcing projects. According to an after-the-fact observation by an expert in one participating organization, the clients' needs and motivations for outsourcing contracts were already familiar to the vendor as a long-term partner under the same conglomerate, so maturity in the planning and implementation phases does not seem to be a differentiating factor for IT outsourcing success. This study will be a foundation for future research on IT outsourcing management and success, particularly in service level management. It can also guide practicing managers in IT outsourcing to focus on the SLM process in the operation and evaluation stage, especially for long-term outsourcing contracts in unique contexts like Korean IT outsourcing projects. This study has limitations in generalization because the sample size is small and the context is confined to a unique environment. For future exploration, survey-based research could be designed and implemented.


Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy
    • /
    • v.13 no.1
    • /
    • pp.151-166
    • /
    • 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies dealing with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities. However, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interactions between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests upon the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer, and therefore is unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation.
The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints except the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm. If the monopolist has high costs, however, the rival will definitely enter the market because it can make positive profits. In this situation, if the incumbent firm unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's costs by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output level up to what would have been produced by a low-cost firm, in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market. Producing the high-cost duopoly output is self-revealing and thus to be avoided. Therefore, the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer and thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game, and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy. This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. However, the problem is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.

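The pooling intuition in the abstract above can be illustrated with a toy linear-demand Cournot model; the parameter values are hypothetical, and this sketch is not the paper's formal signaling model.

```python
# Toy illustration of the pooling logic described above: linear demand
# P = a - Q, constant marginal costs; numbers are invented for illustration.
a = 10.0
c_low, c_high, c_entrant = 2.0, 4.0, 4.0

def monopoly_q(c):
    """Profit-maximizing monopoly output under P = a - Q and marginal cost c."""
    return (a - c) / 2

def cournot(c1, c2):
    """Cournot equilibrium outputs for two firms with costs c1 and c2."""
    q1 = (a - 2*c1 + c2) / 3
    q2 = (a - 2*c2 + c1) / 3
    return q1, q2

# Limit pricing: the high-cost incumbent mimics the low-cost monopoly
# output so that its price does not reveal its cost type.
q_reveal = monopoly_q(c_high)   # self-revealing output -> higher price
q_pool   = monopoly_q(c_low)    # pooling output -> lower, concealing price

# Predation analogue: the entrant's Cournot profit is lower when the
# incumbent behaves like the low-cost type.
def entrant_profit(c_incumbent):
    q_inc, q_ent = cournot(c_incumbent, c_entrant)
    price = a - q_inc - q_ent
    return (price - c_entrant) * q_ent

print(q_reveal, q_pool)
print(round(entrant_profit(c_high), 2), round(entrant_profit(c_low), 2))
```

With these numbers the high-cost incumbent must expand output (and cut price) to mimic the low-cost type, and the entrant earns strictly less against low-cost behavior, which is the sense in which mimicry is predatory.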

A Study on the Differences of Information Diffusion Based on the Type of Media and Information (매체와 정보유형에 따른 정보확산 차이에 대한 연구)

  • Lee, Sang-Gun;Kim, Jin-Hwa;Baek, Heon;Lee, Eui-Bang
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.133-146
    • /
    • 2013
  • While the use of the Internet is routine nowadays, users receive and share information through a variety of media. Through the Internet, information delivery media are diversifying from traditional one-way media, such as newspapers, TV, and radio, into two-way media. In contrast to traditional media, blogs enable individuals to directly upload and share news, and so can be expected to have a different speed of information diffusion than news media that convey information unilaterally. Therefore, this study focuses on the difference between online news and social media blogs. Moreover, the speed of information diffusion varies because information closely related to a person boosts communication between individuals; we believe that users' standard of evaluation changes with the type of information, and that the speed of information diffusion changes with the level of proximity. The purpose of this study is therefore to examine differences in information diffusion based on the type of media, and then to segment information and examine how diffusion differs based on the type of information. This study used the Bass diffusion model, which has been used frequently because it has higher explanatory power than other models, explaining market diffusion through an innovation effect and an imitation effect, and it has been widely applied in other information diffusion studies. In the Bass diffusion model, the innovation effect measures the early-stage impact, while the imitation effect measures the impact of word of mouth at the later stage. According to Mahajan et al. (2000), the innovation effect is emphasized by usefulness and ease of use, while the imitation effect is emphasized by subjective norm and word of mouth. According to Lee et al. (2011), the innovation effect is emphasized by mass communication, and according to Moore and Benbasat (1996), by relative advantage, since the imitation effect operates through within-group influences while the innovation effect is driven by a product's or service's innovativeness. Therefore, our study compared online news and social media blogs to examine the differences between media. We also chose different types of information, including entertainment-related information (Psy's "Gentleman"), current-affairs news (the earthquake in Sichuan, China), and product-related information (Galaxy S4), in order to examine variations in information diffusion. We considered that users' information proximity alters with the type of information, and hence chose these three types, which have different levels of proximity from the users' standpoint, to examine the flow of information diffusion. The first conclusion of this study is that different media have similar effects on information diffusion, even when the media types of the information providers differ; information diffusion was distinguished only by disparities in information proximity. Second, information diffusion differs based on the type of information. From the standpoint of users, product- and entertainment-related information has a high imitation effect because of word of mouth, whereas for current-affairs news the imitation effect dominates the innovation effect. From the results of this study, changes in the flow of information diffusion can be examined and applied to practical use. This study has some limitations, which provide opportunities and suggestions for future research: the differences in information diffusion according to media and proximity presented here are difficult to generalize into theory because of the small sample size. Therefore, if further studies increase the sample size and media diversity, differences in information diffusion according to media type and information proximity could be understood in more detail.
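The Bass model mentioned above has a simple discrete-time form: new adopters in each period are n(t) = (p + q·N(t)/m)(m − N(t)), where p is the innovation (external influence) coefficient, q the imitation (word-of-mouth) coefficient, and m the market potential. The sketch below contrasts an innovation-driven and an imitation-driven parameterization; the parameter values are illustrative, not the study's estimates.

```python
# A discrete-time Bass diffusion sketch.
def bass_curve(p, q, m, periods):
    """Cumulative adopters N(t) under n(t) = (p + q*N/m) * (m - N)."""
    N, path = 0.0, []
    for _ in range(periods):
        n_t = (p + q * N / m) * (m - N)   # new adopters this period
        N += n_t
        path.append(N)
    return path

news_like  = bass_curve(p=0.05, q=0.10, m=1000, periods=30)  # innovation-driven
viral_like = bass_curve(p=0.01, q=0.40, m=1000, periods=30)  # imitation-driven

# The imitation-driven curve starts more slowly (weak external push) but
# accelerates through word of mouth -- the S-shaped takeoff the study exploits
# when comparing estimated p and q across media and information types.
print(round(news_like[0]), round(viral_like[0]))
print(round(news_like[-1]), round(viral_like[-1]))
```

Fitting p and q to observed diffusion curves for each medium and information type, and comparing their magnitudes, is exactly the kind of comparison the abstract describes.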

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and that requires the ability to classify relevant information; hence, text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, different techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most attempts so far propose a new algorithm or modify an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, instead of proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built; real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, i.e., heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. To build a classifier, machine learning algorithms operate under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary in the documents; if the viewpoints of the training data and the target data differ, the features may differ between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various kinds of sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms, because they are not developed to recognize different types of data representation at one time and to bring them into the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data might degrade the performance of the document classifier, so we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
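As a rough sketch of the noise-injection idea described above (not the authors' RSESLA implementation), the following toy example augments a training corpus with copies containing injected out-of-domain tokens and trains a small Naïve Bayes text classifier on the result; all documents, tokens, and labels are invented for illustration.

```python
import math
import random
from collections import Counter, defaultdict

random.seed(3)

# Toy labeled corpus (in-domain) and a heterogeneous, out-of-domain token
# pool used as injected "noise" -- all text here is illustrative.
train = [("stock market shares rise", "econ"),
         ("market invest shares profit", "econ"),
         ("team wins match goal", "sport"),
         ("coach team match score", "sport")]
noise_pool = "tweet lol selfie blog post".split()

def inject_noise(tokens, k=2):
    """Append k random out-of-domain tokens to a training document."""
    return tokens + random.sample(noise_pool, k)

# Build the augmented training set: originals plus noise-injected copies.
augmented = []
for text, label in train:
    toks = text.split()
    augmented.append((toks, label))
    augmented.append((inject_noise(toks), label))

# Multinomial Naive Bayes with add-one smoothing, in plain Python.
counts = defaultdict(Counter)
labels = Counter()
for toks, label in augmented:
    counts[label].update(toks)
    labels[label] += 1
vocab = {w for c in counts.values() for w in c}

def predict(text):
    toks = text.split()
    def score(label):
        total = sum(counts[label].values())
        s = math.log(labels[label] / sum(labels.values()))
        for w in toks:
            s += math.log((counts[label][w] + 1) / (total + len(vocab)))
        return s
    return max(counts, key=score)

print(predict("shares rise profit"))   # in-domain test document
print(predict("match score lol"))      # document containing an off-domain token
```

Because the injected tokens are spread across classes, they behave like noise the classifier learns to discount, which is the robustness effect the paper pursues with heterogeneous data.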

Developing and Applying the Questionnaire to Measure Science Core Competencies Based on the 2015 Revised National Science Curriculum (2015 개정 과학과 교육과정에 기초한 과학과 핵심역량 조사 문항의 개발 및 적용)

  • Ha, Minsu;Park, HyunJu;Kim, Yong-Jin;Kang, Nam-Hwa;Oh, Phil Seok;Kim, Mi-Jum;Min, Jae-Sik;Lee, Yoonhyeong;Han, Hyo-Jeong;Kim, Moogyeong;Ko, Sung-Woo;Son, Mi-Hyun
    • Journal of The Korean Association For Science Education
    • /
    • v.38 no.4
    • /
    • pp.495-504
    • /
    • 2018
  • This study was conducted to develop items measuring scientific core competencies based on the statements of scientific core competencies presented in the 2015 revised national science curriculum, and to examine the validity and reliability of the newly developed items. Based on the curriculum's descriptions of scientific reasoning, scientific inquiry ability, scientific problem-solving ability, scientific communication ability, and participation/lifelong learning in science, 25 items were developed by five science education experts. To explore the validity and reliability of the developed items, data were collected from 11,348 students in elementary, middle, and high schools nationwide. Content validity, substantive validity, internal structure validity, and generalization validity, as proposed by Messick (1995), were examined through various statistical tests. The MNSQ analysis showed no nonconforming items among the 25. A confirmatory factor analysis using structural equation modeling revealed that the five-factor model was a suitable model. Differential item functioning analyses by gender and school level found nonconforming DIF values in only two of 175 cases. A multivariate analysis of variance by gender and school level showed significant differences in test scores between school levels and between genders, and the interaction effect was also significant. The science core competency items based on the 2015 revised national science curriculum are valid from a psychometric point of view and can be used in the science education field.
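Reliability checks like those reported above are commonly summarized with Cronbach's alpha. As an illustration only (simulated responses, not the study's data or its full Messick-style validation), the sketch below computes alpha for five Likert items loading on a single competency.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated responses: 200 students x 5 Likert items (1-5) driven by one
# shared latent ability, so the items correlate by construction.
ability = rng.normal(0, 1, 200)
items = np.clip(
    np.round(3 + ability[:, None] + rng.normal(0, 0.8, (200, 5))), 1, 5)

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 2))   # high alpha: items measure a common construct
```

With real instrument data, one alpha per competency subscale, alongside the factor-analytic and DIF evidence the abstract reports, is the usual way internal-consistency reliability is presented.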