• Title/Summary/Keyword: optimal systems

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising (SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hongkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.167-194 / 2019
  • This research starts from four basic concepts that confront decision making in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio from the sponsor's perspective, through empirical tests that can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as the independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. However, this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR, along with practical management problems. Sponsors make decisions in keyword bids under limited information, so a strategic portfolio approach based on statistical models is necessary. In order to solve the problem in the classical SSA model, the new SSA model is framed on the basic assumption that Rank is the decision variable. Rank is proposed in many papers as the best decision variable for predicting CTR. Further, most search engine platforms provide options and algorithms that make it possible to bid by Rank, so sponsors can participate in keyword bidding with Rank. Therefore, this paper tests the validity of this new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: in order to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationships, screens scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover along with some strategic recommendations. Optimization models using these CTR/CPC estimation models are empirically tested with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization test results show that the suggested SSA model yields significant improvements and is valid for constructing a keyword portfolio using the CTR/CPC estimation models suggested in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio due to myopia over their immediately low profit at present.
In order to solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. The revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. Strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every aspect of CTR, CVR, expected profit, etc. However, it is found that generic keywords are the CTKs and have spillover potential that may increase consumer awareness and lead consumers to brand keywords; this is why generic keywords should be emphasized in keyword bidding. The contributions of the thesis are to propose a novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of the keywords, to propose Rank-based statistical modelling and management in constructing the keyword portfolio, to perform empirical tests, and to propose new strategic guidelines focusing on the CTK and a modified CVR optimization objective function that reflects the spillover effect instead of the previous expected profit models. A minimal sketch of the Rank-based optimization idea follows below.
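
As a concrete reading of that idea: once CTR and CPC are modeled as functions of Rank alone, the portfolio problem reduces to choosing one rank per keyword under a budget. The sketch below is only an illustration, not the paper's model: the keyword names, the CTR/CPC-versus-Rank shapes, the impression counts, and the budget are all invented assumptions, and the exhaustive search stands in for whatever solver the authors used.

```python
# Minimal sketch of a Rank-based CTR-maximization keyword portfolio.
# All numbers and functional shapes are illustrative assumptions.
from itertools import product

# Hypothetical fitted estimates: CTR and CPC as functions of rank (1..5).
keywords = {
    "brand_kw":    {"ctr": lambda r: 0.12 / r,      "cpc": lambda r: 900 / r, "impressions": 5000},
    "generic_kw":  {"ctr": lambda r: 0.05 / r**0.7, "cpc": lambda r: 600 / r, "impressions": 20000},
    "longtail_kw": {"ctr": lambda r: 0.03 / r**0.5, "cpc": lambda r: 300 / r, "impressions": 8000},
}
RANKS = [1, 2, 3, 4, 5]
BUDGET = 2_000_000  # assumed daily budget (KRW)

def evaluate(assignment):
    """Total expected clicks and cost for a rank assignment {keyword: rank}."""
    clicks = cost = 0.0
    for kw, rank in assignment.items():
        spec = keywords[kw]
        kw_clicks = spec["impressions"] * spec["ctr"](rank)
        clicks += kw_clicks
        cost += kw_clicks * spec["cpc"](rank)
    return clicks, cost

# Exhaustive search over rank combinations (fine for a few keywords; a real
# portfolio would need integer programming or a heuristic instead).
best = None
for combo in product(RANKS, repeat=len(keywords)):
    assignment = dict(zip(keywords, combo))
    clicks, cost = evaluate(assignment)
    if cost <= BUDGET and (best is None or clicks > best[1]):
        best = (assignment, clicks, cost)

assignment, clicks, cost = best
print(f"best ranks: {assignment}, expected clicks: {clicks:.0f}, cost: {cost:,.0f}")
```

The CVR variant would simply swap the objective from clicks to clicks × CVR × margin, which is where the myopia problem the abstract describes enters: keywords with low immediate margin drop out unless an EOP-style term is added.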

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented/virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To provide those services, reduced latency and high reliability, on top of high-speed data, are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) applications, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data speeds. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, while their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes creates overload in its processing. Basically, SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with the usual centralized structure have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. Thus, SDNs need to be split at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, RTD is not a significant factor because the link is fast enough, with less than 1 ms of delay, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; there, delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50~250 m in cell radius and considered vehicle speeds of 30~200 km/h in order to examine the network architecture that minimizes the delay. A rough dwell-time and latency sketch follows below.
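
The scale of the problem can be seen with back-of-the-envelope arithmetic. The sketch below only combines the abstract's stated ranges (cell radius 50~250 m, speeds 30~200 km/h, RTD under 1 ms) with an assumed delay decomposition; the 10 ms information change cycle and 5 ms SDN processing time are illustrative guesses, not the paper's measured values.

```python
# Rough dwell-time and latency arithmetic for a vehicle crossing small
# 5G cells. A back-of-the-envelope sketch, not the paper's simulator.

def cell_dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time to cross one cell diameter at a given speed."""
    speed_ms = speed_kmh / 3.6
    return (2 * cell_radius_m) / speed_ms

def total_delay_ms(change_cycle_ms: float, rtd_ms: float, sdn_proc_ms: float) -> float:
    """Assumed latency budget: info change cycle + RTD + SDN processing."""
    return change_cycle_ms + rtd_ms + sdn_proc_ms

for radius in (50, 150, 250):      # cell radius range from the abstract
    for speed in (30, 100, 200):   # vehicle speed range from the abstract
        dwell = cell_dwell_time_s(radius, speed)
        print(f"radius {radius:3d} m, speed {speed:3d} km/h -> dwell {dwell:6.2f} s")

# Example budget: RTD under 1 ms (per the abstract), with an assumed
# 10 ms information change cycle and 5 ms SDN processing time.
print(f"total delay ≈ {total_delay_ms(10.0, 1.0, 5.0):.1f} ms")
```

Even at 200 km/h a 50 m cell gives a dwell time near 1.8 s, so the assumed 16 ms information budget is dominated by the change cycle and SDN processing, which matches the abstract's claim that RTD itself is not the binding factor.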

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information exposure is increasing, because the data retrieved by sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing; existing studies have focused only on a small subset of its technical characteristics. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limits of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Hence, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. The purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in the privacy and context-aware system fields was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important factors among the main factors and sub-factors, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were largely overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns.
The reason a traditional questionnaire method was not selected is that users absolutely lack understanding of and experience with the new technology behind context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensory networks as the most important technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, along with which technologies are needed and in what sequence, to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, presupposing the continued development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Judging from the sub-factor evaluation results, additional studies would be necessary on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, moving toward the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance. A small sketch of the concordance computation follows below.
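
The concordance analysis can be made concrete with Kendall's coefficient of concordance W, a standard statistic for measuring agreement across Delphi rounds; whether the authors used exactly this statistic is an assumption, and the rank matrix below is invented.

```python
# Minimal sketch of a Delphi concordance check: Kendall's W over the
# experts' rank-orderings of the factors. The data matrix is invented.
import numpy as np

# rows = experts, columns = factors; entries are ranks 1..n per expert
ranks = np.array([
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
])

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W in [0, 1], no tie correction:
    W = 12 * S / (m^2 * (n^3 - n)), with S the sum of squared deviations
    of the column rank sums from their mean."""
    m, n = ranks.shape               # m experts, n factors
    col_sums = ranks.sum(axis=0)     # total rank per factor
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12 * s / (m**2 * (n**3 - n))

print(f"Kendall's W = {kendalls_w(ranks):.3f}")  # near 1 = strong consensus
```

In a Delphi process, rounds would typically continue until W stabilizes above an agreed threshold, at which point the mean ranks yield the final ordered factor list.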

Development of Fertilizer-Dissolving Apparatus Using Air Pressure for Nutrient Solution Preparation and Dissolving Characteristics (공기를 이용한 양액 제조용 비료용해 장치 개발 및 용해특성)

  • Kim, Sung Eun;Kim, Young Shik
    • Journal of Bio-Environment Control / v.21 no.3 / pp.163-169 / 2012
  • We conducted three experiments to develop a fertilizer-dissolving apparatus for fertigation or hydroponic cultivation, aiming to decrease the fertilizer dissolving time and the labor input through automation. All of the experiments were conducted twice. In the first experiment, four selected treatments were tested for dissolving fertilizers rapidly. The first treatment dissolved fertilizer by spraying water with a submerged water pump placed in the nutrient solution tank; the water was sprayed onto the fertilizer, which was dissolved and filtered through hemp cloth mounted on the upper part of the nutrient solution tank (Spray). The second treatment installed a propeller on the bottom of the nutrient solution tank (Propeller). The third treatment produced a water stream with a submerged water pump located at the bottom of the tank (Submerged). Finally, the fourth treatment produced an air stream through air pipes with an air compressor located at the bottom of the tank (Airflow). The Spray treatment took the shortest time to dissolve fertilizer, yet it was inconvenient to implement and to manage after installation. The Airflow treatment was judged the best method in terms of dissolving time, labor input, and automation. In the second experiment, the Airflow treatment was investigated in more detail. To determine the optimal number of air pipe arms and their specification, different versions of 6- and 8-arm air pipe systems were evaluated. The apparatus with 6 arms (Arm-6), made of low-density polyethylene, was determined to be the best system, evaluated on its fertilizer dissolving time, its ease of use regardless of the lid size of the tank, and its ease of production and installation. In the third experiment, the Submerged and Arm-6 treatments were compared for dissolving time and economics. The Arm-6 treatment reduced the dissolving time roughly eightfold and proved very economical. In addition, dissolving characteristics were investigated for KNO₃, Ca(NO₃)₂·4H₂O, and Fe-EDTA.

Comparison Study of Water Tension and Content Characteristics in Differently Textured Soils under Automatic Drip Irrigation (자동점적관수에 의한 토성별 수분함량 및 장력 변화특성 비교 연구)

  • Kim, Hak-Jin;Ahn, Sung-Wuk;Han, Kyung-Hwa;Choi, Jin-Yong;Chung, Sun-Ok;Roh, Mi-Young;Hur, Seung-Oh
    • Journal of Bio-Environment Control / v.22 no.4 / pp.341-348 / 2013
  • Maintenance of adequate soil water tension or content during the period of crop growth is necessary to support optimum plant growth and yields. A better understanding of soil water tension and content for precision irrigation would allow optimal soil water conditions for crops and minimize the adverse effects of water stress on crop growth and development. This research reports a comparison of soil water tension and content variations over time in differently textured soils under drip irrigation, using two different water management methods: the pulse time method and the required water estimation method. The pulse time-based irrigation was performed by turning the solenoid valve on and off for preset times, allowing the wetting front to disperse in the root zone before additional water was applied. The required water estimation method is a new water control logic designed by the Rural Development Administration that applies the required amount of water based on a conversion of the measured water tension into water content. The pulse time irrigation method under drip irrigation at a high tension of -20 kPa and high temperatures over 30°C was not successful at maintaining moisture tensions within an appropriate range of 5 kPa, because the preset irrigation times used for water control could not compensate for the change in evapotranspiration between day and night. The response times and patterns of the water contents for all of the tested soils, measured with capacitance-based sensor probes, were faster and more direct than those of the water tensions measured with porous ceramic cup-based tensiometers when water was applied, indicating that water content would be a better control variable for automatic irrigation. The required water estimation-based irrigation method provided relatively stable control of moisture tension, even though somewhat lower tension values were obtained compared to the target tension of -20 kPa, indicating that growers could expect it to be effective in controlling low tensions ranging from -10 to -20 kPa. A sketch of the tension-to-content conversion follows below.
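
The tension-to-content conversion at the heart of the required water estimation logic can be sketched with the standard van Genuchten retention curve; the RDA controller's actual equation is not given in the abstract, so both the curve choice and the loam-like parameter values below are assumptions.

```python
# Sketch of converting measured soil tension into volumetric water content
# using the van Genuchten (1980) retention curve. Parameters are typical
# loam values, assumed for illustration only.

def vg_water_content(tension_kpa, theta_r=0.078, theta_s=0.43,
                     alpha=0.036, n=1.56):
    """Volumetric water content (m3/m3) from soil tension (kPa magnitude).

    theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*h)^n)^(1 - 1/n),
    with h the suction head in cm (1 kPa ≈ 10.2 cm H2O).
    """
    h_cm = abs(tension_kpa) * 10.2
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h_cm) ** n) ** m

# Required water per unit area ≈ (target content - current content) * depth.
target = vg_water_content(10)    # content at the wetter -10 kPa end
current = vg_water_content(20)   # content at the -20 kPa target tension
depth_mm = 300                   # assumed root-zone depth
print(f"irrigate ≈ {(target - current) * depth_mm:.1f} mm of water")
```

This also makes the abstract's observation plausible: content responds smoothly and immediately through such a curve, while tensiometer readings lag the wetting front, favoring content as the control variable.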

Field Survey on Smart Greenhouse (스마트 온실의 현장조사 분석)

  • Lee, Jong Goo;Jeong, Young Kyun;Yun, Sung Wook;Choi, Man Kwon;Kim, Hyeon Tae;Yoon, Yong Cheol
    • Journal of Bio-Environment Control / v.27 no.2 / pp.166-172 / 2018
  • This study conducted a field survey of smart greenhouse-based farms of seven types, to determine the actual state of the smart greenhouses distributed across the nation, before selecting a system to implement an optimal greenhouse environment and conducting research on higher productivity based on data related to crop growth, development, and environment. The findings show that the farms were close to intelligent or advanced smart farms, given the main purposes of the leading cases across the smart farm types found in the field. As for the age of the farmers, those in their forties and sixties accounted for the largest shares, but those in their fifties or younger ran 21 farms, accounting for approximately 70.0%. Most of the farmers had a cultivation career of ten years or less. As for the greenhouse type, the 1-2W type accounted for 50.0%, and the multi-span type accounted for 80.0% (24 farms). As for the crops cultivated, only three farms cultivated flowers, with the remaining farms growing only fruit vegetables, of which tomato and paprika accounted for approximately 63.6%. As for control systems, approximately 77.4% (24 farms) used a domestic control system. As for the control method, three farms regulated temperature and humidity with a control panel only, while the remaining farms adopted a digital control method combining a panel with a computer. There were a total of nine environmental factors to measure and control, including temperature. While all the surveyed farms measured temperature, the number of farms installing a ventilation or air-flow fan or measuring the concentration of carbon dioxide was relatively small. As for heating systems, 46.7% of the farms used an electric boiler; in addition, hot water boilers, heat pumps, and kerosene boilers were used. As for investment in a control system, the investment scale differed among the farms, from 10 million won to 100 million won. As for difficulties with greenhouse management, the farmers complained about difficulties in using smart phones and digital control systems due to their old age, and about the utter absence of education and materials on smart greenhouse management, followed by high consultant fees and system malfunctions, in that order.

An Improved Online Algorithm to Minimize Total Error of the Imprecise Tasks with 0/1 Constraint (0/1 제약조건을 갖는 부정확한 태스크들의 총오류를 최소화시키기 위한 개선된 온라인 알고리즘)

  • Song, Gi-Hyeon
    • Journal of KIISE: Computer Systems and Theory / v.34 no.10 / pp.493-501 / 2007
  • The imprecise real-time system provides flexibility in scheduling time-critical tasks. Most problems of scheduling to satisfy both the 0/1 constraint and timing constraints while minimizing the total error are NP-complete when the optional tasks have arbitrary processing times. Liu suggested a reasonable strategy for scheduling tasks with the 0/1 constraint on uniprocessors to minimize the total error, and Song et al. suggested a corresponding strategy for multiprocessors. But these are all off-line algorithms. In online scheduling, the NORA algorithm can find a schedule with the minimum total error for an imprecise online task system; NORA adopts the EDF strategy for scheduling the optional tasks. On the other hand, for a task system with the 0/1 constraint, EDF scheduling may not be optimal in the sense of minimizing the total error. Furthermore, when the optional tasks are scheduled in ascending order of their required processing times, the NORA algorithm, which adopts the EDF strategy, may not produce the minimum total error. Therefore, in this paper, an online algorithm is proposed to minimize the total error for an imprecise task system with the 0/1 constraint. To compare the performance of the proposed algorithm and the NORA algorithm, a series of experiments was performed. As a consequence of the performance comparison, it was concluded that the proposed algorithm produces total error similar to the NORA algorithm when the optional tasks are scheduled in random order of their required processing times, but produces less total error than the NORA algorithm especially when the optional tasks are scheduled in ascending order of their required processing times. A toy illustration of the ordering effect follows below.
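
The core observation, that an EDF ordering combined with an all-or-nothing optional part can lose to a shortest-processing-time ordering, fits in a few lines. The task set and the greedy run-or-skip rule below are invented for illustration and are neither the NORA algorithm nor the paper's proposed algorithm.

```python
# Toy illustration: scheduling 0/1 optional parts on one processor,
# comparing EDF order against ascending-processing-time (SPT) order.
from dataclasses import dataclass

@dataclass
class OptionalTask:
    name: str
    proc_time: int   # required processing time of the optional part
    deadline: int

def schedule(tasks):
    """Greedily run each task fully if it still meets its deadline
    (0/1 constraint: all or nothing). Total error = skipped time."""
    t, error = 0, 0
    for task in tasks:
        if t + task.proc_time <= task.deadline:
            t += task.proc_time          # execute fully
        else:
            error += task.proc_time      # discard entirely
    return error

tasks = [
    OptionalTask("T1", proc_time=5, deadline=5),
    OptionalTask("T2", proc_time=3, deadline=6),
    OptionalTask("T3", proc_time=3, deadline=7),
]

edf = sorted(tasks, key=lambda x: x.deadline)
spt = sorted(tasks, key=lambda x: x.proc_time)
print("EDF order error:", schedule(edf))  # 6: T1 runs and blocks T2 and T3
print("SPT order error:", schedule(spt))  # 5: T2 and T3 run, only T1 skipped
```

Here EDF commits to the long early-deadline task and discards both shorter ones (error 6), while the ascending-processing-time order keeps the two short tasks and discards only T1 (error 5), mirroring the regime in which the abstract reports gains over NORA.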

A Study on the Cutting Optimal Power Requirements of Fast Growing Trees by Circular Saw (원형톱에 의한 속성수 절단 적정 소요동력 산정에 관한 연구)

  • Choi, Yun Sung;Kim, Dae Hyun;Oh, Jae Heun
    • Journal of Korean Society of Forest Science / v.103 no.3 / pp.402-407 / 2014
  • In this study, Italian poplar (Populus euramericana) was selected as the test specimen to measure cutting power at harvest. The experiment controlled three levels of feed rate (0.41, 1.25, and 2.5 m/s), three levels of sawing speed (800, 1,000, and 1,200 rpm), and five levels of root collar diameter (50, 70, 90, 110, and 130 mm). The volume harvested after 3 years (root collar diameter 50 mm) was 10.5 tons, which falls short of the target biomass amount of 20~30 ton/ha. In addition, the biomass amounts at diameters of 90 and 110 mm, which reached the target amount, were estimated to be 23.5 and 32.5 ton/ha, respectively. The experiment found that powers of 128.2 and 175.8 W were consumed when cutting at a feed rate of 0.41 m/s and the minimum sawing speed (800 rpm), respectively; with a working area of 0.3 ha/h, this corresponds to working capacities of 16.5 and 22.8 ton/h, respectively. The power consumed at a feed rate of 1.25 m/s was estimated to be 113.8 and 153.7 W, respectively, with working capacities of 23.5 and 32.5 ton/h for a working area of 1 ha/h. The power consumed at a feed rate of 2.5 m/s was estimated to be 119.8 and 166.9 W, respectively, with working capacities of 47.0 and 65.5 ton/h for a working area of 2 ha/h. Therefore, a power source for the harvesting machine operating at a feed rate of 1.25 or 2.50 m/s and a sawing speed of 800 rpm should be selected, as it can process the target amount of estimated biomass.

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.1-23 / 2014
  • Market forecasting aims to estimate the sales volume of a product or service sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting assists in determining the timing of new product introduction and product design, and in establishing production plans and marketing strategies, enabling a more efficient decision-making process. Moreover, accurate market forecasting enables governments to establish national budgets efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods using past time series data, categorize products showing similar growth patterns, understand the markets in the industry, and forecast the future outlook of such products. The study suggests a useful and meaningful process (methodology) for identifying market growth patterns with quantitative growth models and a data mining algorithm. The methodology is as follows. At the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as sales volume and domestic consumption for a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other relevant government organizations. For collected data that cannot be analyzed directly, owing to a lack of past data or the alteration of code names, data pre-processing should be performed. At the second stage, an optimal model for market forecasting should be selected. This model can vary with the characteristics of each categorized industry. As this study focuses on the ICT industry, in which new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected. A hybrid model that combines different models can also be considered: the hybrid model considered in this study estimates the size of the market potential through the Logistic and Gompertz models, and those figures are then used in the Bass model. The third stage is to evaluate which model explains the data most accurately. To do this, the parameters are estimated from the collected past time series data, the models' predicted values are generated, and the root-mean-square error (RMSE) is calculated; the model showing the lowest average RMSE value for every product type is taken as the best model. At the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with the self-organizing map algorithm. A self-organizing map is trained with the market pattern parameters of all products or services as input data, and the products or services are organized onto an N×N map. The number of clusters increases from 2 to M, depending on the characteristics of the nodes on the map. The clusters are divided into zones, and the clusters providing the most meaningful explanation are selected. Based on the final selection of clusters, the boundaries between the nodes are chosen and, ultimately, the market growth pattern map is completed. The last step is to determine the final characteristics of the clusters as well as the market growth curve. The average of the market growth pattern parameters in each cluster is taken as a representative figure.
Using this figure, a growth curve is drawn for each cluster, and its characteristics are analyzed. Also, taking into consideration the product types in each cluster, their characteristics can be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries. A model-fitting sketch follows below.
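
Stages two and three of the process, fitting candidate growth models and keeping the lowest-RMSE one, can be sketched directly. The sales series below is synthetic and the starting parameter guesses are rough; only the Logistic, Gompertz, and Bass functional forms follow the abstract.

```python
# Sketch of growth-model selection: fit Logistic, Gompertz, and Bass
# curves to a (synthetic) cumulative sales series and compare RMSE.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 13)  # periods
sales = np.array([3, 5, 9, 15, 24, 35, 46, 55, 61, 65, 67, 68], dtype=float)

def logistic(t, K, a, b):
    return K / (1 + a * np.exp(-b * t))

def gompertz(t, K, a, b):
    return K * np.exp(-a * np.exp(-b * t))

def bass(t, m, p, q):
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

models = {"logistic": (logistic, [70, 20, 0.5]),
          "gompertz": (gompertz, [70, 5, 0.3]),
          "bass":     (bass,     [70, 0.02, 0.5])}

for name, (f, p0) in models.items():
    params, _ = curve_fit(f, t, sales, p0=p0, maxfev=10000)
    rmse = np.sqrt(np.mean((f(t, *params) - sales) ** 2))
    print(f"{name:8s} RMSE = {rmse:.2f}  params = {np.round(params, 3)}")
# The lowest-RMSE model's parameters would then feed the SOM clustering.
```

In the paper's pipeline, the fitted parameter vectors of the winning model, one per product, become the input features for the self-organizing map that yields the growth pattern map.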

Growth and Yield of Potato after Transplanting of Potato Plug Seedlings Grown at Different Plug Cell Size and Photoperiod (다른 플러그 셀 크기와 일장에서 생산된 감자 플러그 묘의 정식 후 생육과 수량)

  • Kim, Jeong-Man;Choi, Ki-Yeung;Kim, Yeng-Hyeon;Park, Eun-Seok
    • Journal of Bio-Environment Control / v.17 no.1 / pp.26-31 / 2008
  • This experiment was conducted to investigate the growth and yield of potato after transplanting plug seedlings of 'Superior' and 'Dejima' produced at different plug cell sizes (105, 162, and 288) and photoperiods (8/16, 12/12, and 16/8 h day/night) for 20 days in a controlled plant growth system. Growth and relative growth rate of 'Superior' plug seedlings were affected by plug cell size and photoperiod at 7 weeks after transplanting. Tuber weight of 'Superior' increased as cell size and photoperiod increased; that of 'Dejima' was highest at the 105-cell size and differed with photoperiod. At 90 days after transplanting, the tuber weight of 'Superior' (258.9~471.9 g/plant) was high at the 105 and 162 cell sizes and the 16/8 h photoperiod. That of 'Dejima' (278.2~428.0 g/plant) was high at the 105 cell size but did not differ with photoperiod. The number of tubers per plant was 2.6~6.9 for 'Superior' and 2.2~3.6 for 'Dejima', and did not differ significantly with cell size or photoperiod. Large tubers over 80 g accounted for 32.0~50.9% in 'Superior' and 41.0~56.7% in 'Dejima'. The proportion of large tubers in both cultivars fell as the cell size decreased; that of 'Superior' increased as the photoperiod increased, but that of 'Dejima' did not differ. As a result, the optimal conditions for potato plug seedlings are considered to be a plug tray of 162 cells or fewer and a photoperiod of over 12 h.