• Title/Summary/Keyword: Input Data

Search Results: 8,338

Large-Scale Current Source Development in Nuclear Power Plant (원전에 사용되는 직류전압제어 대전류원의 개발)

  • Jong-ho Kim;Gyu-shik Che
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.3
    • /
    • pp.348-355
    • /
    • 2024
  • A current source capable of stably supplying a measurement current is required to measure and test critical equipment that demands a large test current, such as the control element drive mechanism control system (CEDMCS), when decommissioning a nuclear power plant. Although a DC-voltage-controlled constant current source is essential for testing such equipment, ordinary supplies provide only a voltage output: they cannot deliver a stable constant current regardless of load variation. Developing a current source is far more difficult than developing a voltage source, yet an ampere-class source supplying more than ten times the normal current is indispensable for testing the equipment mentioned above. We therefore developed a large-scale current source that is controlled by an input DC voltage and supplies a stable constant current to the equipment under test. We measured and tested nuclear power plant equipment with real site data over a long period, performed long-duration load tests, and thereby demonstrated the validity of the design. The developed source will be used for measuring and testing critical equipment installed in the future.
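The core relation behind a voltage-controlled current source can be sketched numerically. This is a generic transconductance model (I_out = V_ctrl / R_sense), not the paper's actual circuit; all component values below are illustrative assumptions.

```python
# Sketch of the voltage-controlled constant-current relation.
# Component values are hypothetical illustrations, not from the paper.

def output_current(v_ctrl, r_sense):
    """Ideal transconductance stage: I_out = V_ctrl / R_sense,
    independent of the load resistance (within compliance)."""
    return v_ctrl / r_sense

def compliance_voltage(i_out, r_load):
    """Voltage headroom the source must supply for a given load."""
    return i_out * r_load

# A 0.1-ohm sense resistor turns a 0-5 V control input into a 0-50 A output.
i = output_current(5.0, 0.1)
# The current is the same for any load; only the required headroom changes.
v_needed_small = compliance_voltage(i, 0.02)   # 20 mOhm load
v_needed_large = compliance_voltage(i, 0.10)   # 100 mOhm load
```

The point of the sketch is the load independence: changing `r_load` changes only the compliance voltage the source must sustain, never the delivered current.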

Combined analysis of meteorological and hydrological drought for hydrological drought prediction and early response - Focusing on the 2022-23 drought in Jeollanam-do - (수문학적 가뭄 예측과 조기대응을 위한 기상-수문학적 가뭄의 연계분석 - 2022~23 전남지역 가뭄을 대상으로)

  • Jeong, Minsu;Hong, Seok-Jae;Kim, Young-Jun;Yoon, Hyeon-Cheol;Lee, Joo-Heon
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.195-207
    • /
    • 2024
  • This study selected major drought events that occurred in the Jeonnam region from 1991 to 2023 and examined the occurrence mechanisms of both meteorological and hydrological drought. Daily drought indices were calculated using rainfall and dam storage as input data, and the propagation characteristics from meteorological to hydrological drought were analyzed. The characteristics of the 2022-23 drought, which recently caused serious damage in the Jeonnam region, were then evaluated. Compared with historical droughts, the hydrological drought of 2022-23 lasted 334 days, the second longest after 2017-18, and its severity of -1.76 was the most extreme on record. A linked analysis of the SPI (Standardized Precipitation Index) and SRSI (Standardized Reservoir Storage Index) suggests that SPI(6) can be used proactively to respond to hydrological drought. Furthermore, the similarity between SRSI and SPI(12) in long-term drought monitoring confirms the applicability of SPI(12) to hydrological drought monitoring in ungauged basins. This study confirmed that long-term dryness arising during the summer rainy season can develop into hydrological drought of a serious level. For preemptive drought response, it is therefore necessary to use real-time monitoring of various drought indices and to understand the propagation from meteorological through agricultural to hydrological drought, so as to secure a sufficient response period.
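The drought indices discussed above are standardizations of accumulated hydrometeorological variables. The sketch below computes an SPI-like index from monthly rainfall; for simplicity it uses a plain z-score of the windowed sums, whereas the operational SPI fits a gamma distribution and maps its CDF to standard-normal quantiles. The rainfall series is synthetic.

```python
from statistics import mean, stdev

def rolling_sum(series, window):
    """n-period accumulated precipitation (e.g. window=6 for SPI(6))."""
    return [sum(series[i - window + 1 : i + 1])
            for i in range(window - 1, len(series))]

def spi_like(monthly_rain, window):
    """Simplified SPI: z-score of the accumulated rainfall.
    (The operational SPI fits a gamma distribution and maps its CDF
    to standard-normal quantiles; the z-score keeps the same sign
    convention: negative = drier than normal.)"""
    acc = rolling_sum(monthly_rain, window)
    m, s = mean(acc), stdev(acc)
    return [(a - m) / s for a in acc]

# Two synthetic years of monthly rainfall with a summer wet season.
rain = [120, 80, 30, 10, 5, 15, 200, 300, 250, 60, 40, 90,
        110, 70, 20, 8, 4, 10, 180, 280, 240, 50, 30, 80]
spi6 = spi_like(rain, 6)   # negative values flag the driest 6-month windows
```

Longer windows, such as SPI(12), smooth out short wet spells, which is why the paper finds SPI(12) tracking the reservoir-based SRSI.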

Nondestructive Quantification of Corrosion in Cu Interconnects Using Smith Charts (스미스 차트를 이용한 구리 인터커텍트의 비파괴적 부식도 평가)

  • Minkyu Kang;Namgyeong Kim;Hyunwoo Nam;Tae Yeob Kang
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.31 no.2
    • /
    • pp.28-35
    • /
    • 2024
  • Corrosion inside electronic packages significantly impacts system performance and reliability, so non-destructive diagnostic techniques are needed for system health management. This study presents a non-destructive method for assessing corrosion in copper interconnects using the Smith chart, a tool that visualizes the magnitude and phase of complex impedance together. For the experiment, specimens simulating copper transmission lines were subjected to temperature and humidity cycles according to the MIL-STD-810G standard to induce corrosion. The corrosion level of each specimen was quantitatively assessed and labeled based on color changes in the R channel. As corrosion progressed, the S-parameters and Smith charts showed distinct patterns corresponding to five levels of corrosion, confirming the effectiveness of the Smith chart as a corrosion assessment tool. Furthermore, through data augmentation, 4,444 Smith charts representing various corrosion levels were obtained, and artificial intelligence models were trained to output the corrosion stage of a copper interconnect from an input Smith chart. Among the image-classification CNN and Transformer models tested, the ConvNeXt model achieved the highest diagnostic performance, with an accuracy of 89.4%. Diagnosing corrosion with the Smith chart allows a non-destructive evaluation based on electrical signals, and by integrating and visualizing signal magnitude and phase information it is expected to enable intuitive and noise-robust diagnosis.
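The Smith chart representation used above rests on a simple mapping: an impedance Z is normalized to a reference Z0 and drawn as the complex reflection coefficient Γ = (Z − Z0)/(Z + Z0). A minimal sketch, with illustrative impedances rather than measured ones:

```python
# Sketch of the mapping behind the Smith chart: a complex impedance Z is
# normalized to the reference impedance Z0 (typically 50 ohm) and plotted
# as the reflection coefficient Gamma = (Z - Z0) / (Z + Z0).
# The example impedances are illustrative, not measurements from the paper.

def reflection_coefficient(z, z0=50.0):
    """Map an impedance to a point inside the unit circle of the Smith chart."""
    return (z - z0) / (z + z0)

# A matched line sits at the chart centre; corrosion that raises the series
# resistance and reactance of a copper trace drags the point away from it.
gamma_matched = reflection_coefficient(50 + 0j)    # centre of the chart
gamma_corroded = reflection_coefficient(65 + 12j)  # shifted point

# Magnitude and phase together are what the classifier sees in the chart.
mag = abs(gamma_corroded)
```

Because |Γ| < 1 for any passive impedance, every corrosion state lands inside the unit circle, which is what makes the chart a compact, bounded input image for a CNN.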

A Study on CO2 injectivity with Nodal Analysis in Depleted Oil Reservoirs (고갈 유전 저류층에서 노달분석을 이용한 CO2 주입성 분석 연구)

  • Yu-Bin An;Jea-Yun Kim;Sun-il Kwon
    • Journal of the Korean Institute of Gas
    • /
    • v.28 no.2
    • /
    • pp.66-75
    • /
    • 2024
  • This paper presents the development of a CO2 injectivity analysis model using nodal analysis for a depleted oil reservoir in Malaysia. A basic model was established from the final well report of the A appraisal well, and sensitivity analyses were performed on injection pressure, tubing size, reservoir pressure, reservoir permeability, and thickness. Using the well testing report of the A appraisal well, a permeability of 10 md was determined through production nodal analysis, and an injection well model was set up from the well's basic input data. Nodal analysis of the basic model estimated a CO2 injection rate of 13.29 MMscfd at a bottomhole pressure of 3,000.74 psia. The sensitivity analysis showed that the injection rate increases roughly linearly with higher injection pressure, reservoir thickness, and permeability, and also increases roughly linearly as the reservoir pressure at injection decreases. By analyzing the injection rate per inch of tubing size, an optimal tubing size of 2.548 inches was determined. If the formation parting pressure is known, nodal analysis can also predict the maximum allowable reservoir pressure and injection pressure by comparison with the bottomhole pressure.
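Nodal analysis, as used above, finds the operating point where the pressure the well system can deliver at the node equals the pressure the reservoir requires to accept a given rate. The sketch below scans for that crossing using made-up linear-injectivity and tubing-performance curves; none of the coefficients come from the study.

```python
# Hedged sketch of the nodal-analysis idea: the operating point is where the
# bottomhole pressure the tubing can deliver equals the pressure the
# reservoir requires to accept that injection rate. All coefficients are
# made-up illustrations, not values from the Malaysian field study.

def reservoir_required_bhp(rate, p_res=2500.0, injectivity=0.25):
    """Darcy-like injectivity: BHP (psia) needed to push `rate` in."""
    return p_res + rate / injectivity

def tubing_delivered_bhp(rate, p_wellhead=3000.0, hydrostatic=800.0,
                         friction=0.002):
    """Wellhead pressure plus hydrostatic head, minus frictional loss."""
    return p_wellhead + hydrostatic - friction * rate ** 2

def operating_point(step=0.1, max_rate=1000.0):
    """Scan rates until the two curves cross (coarse graphical solution)."""
    rate = 0.0
    while rate < max_rate:
        if tubing_delivered_bhp(rate) <= reservoir_required_bhp(rate):
            return rate
        rate += step
    return None

q = operating_point()  # predicted injection rate at the node
```

With these toy curves the crossing falls near 284.5 rate units; in the paper the same intersection, computed with real well and reservoir models, yields the 13.29 MMscfd estimate.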

Development of Bond Strength Model for FRP Plates Using Back-Propagation Algorithm (역전파 학습 알고리즘을 이용한 콘크리트와 부착된 FRP 판의 부착강도 모델 개발)

  • Park, Do-Kyong
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.10 no.2
    • /
    • pp.133-144
    • /
    • 2006
  • To characterize this bond strength, previous researchers measured the bond strength of FRP plates through experiments with various test variables. However, because such experiments demand considerable expenditure on equipment and time and are difficult to carry out, they have been conducted only to a limited extent. This study aims to develop the most suitable artificial neural network model by applying various network models and algorithms to the bond test data of previous researchers. The bond strength was taken as the output layer of the network, and the input layer variables were selected as the thickness, width, bonded length, elastic modulus, and tensile strength of the FRP plate, together with the compressive strength, tensile strength, and width of the concrete. The developed model was trained with back-propagation, and its error converged to within 0.001. In addition, the over-fitting problem in generalization was resolved in a more general way by introducing a Bayesian technique. The developed model was verified by comparison with bond strength results from other researchers that had not been used in training.
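The training procedure described above (a feed-forward network fitted by back-propagation until the error converges) can be sketched compactly. The network size, learning rate, and the normalized input/target pairs below are synthetic placeholders, not the authors' experimental data.

```python
# Minimal sketch of the paper's approach: a one-hidden-layer network trained
# by back-propagation to map plate/concrete parameters to bond strength.
# The training pairs are synthetic placeholders, not the authors' data.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy normalized inputs: [plate thickness, bonded length, concrete strength]
data = [([0.2, 0.3, 0.5], 0.4), ([0.8, 0.6, 0.7], 0.8),
        ([0.1, 0.9, 0.4], 0.5), ([0.6, 0.2, 0.9], 0.7)]

n_in, n_hid = 3, 4
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
lr = 0.5

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)))
    return h, y

def epoch_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = epoch_loss()
for _ in range(2000):                    # back-propagation loop
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)       # output-layer delta
        for j in range(n_hid):
            dh = dy * w2[j] * h[j] * (1 - h[j])   # hidden delta (old w2)
            w2[j] -= lr * dy * h[j]               # then update weights
            for i in range(n_in):
                w1[j][i] -= lr * dh * x[i]
loss_after = epoch_loss()
```

The paper's refinement, Bayesian regularization, would add a weight-decay term to this loss to curb over-fitting; the plain squared-error loop above shows only the basic convergence behaviour.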

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • Accurate stock market forecasting has long been studied in academia, and a variety of forecasting models using different techniques now exist. Recently, many attempts have been made to predict stock indices with machine learning methods, including deep learning. While both fundamental and technical analysis are used in traditional stock trading, technical analysis is more suitable for short-term prediction and for statistical and mathematical techniques. Most studies based on technical indicators have modeled stock market movement as a binary classification - rising or falling - of the future market (usually the next trading day). However, binary classification has many drawbacks for predicting trends, identifying trading signals, or triggering portfolio rebalancing. In this study, we predict the stock index trend as a multi-class problem (upward trend, sideways, downward trend) instead of the conventional binary scheme. Although this multi-class problem could be addressed with techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN), we adopt multi-class support vector machines (MSVM), which have proved superior in prediction performance, and propose an optimization model that uses a genetic algorithm as a wrapper to improve them. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training instances (instance selection).
To verify the performance of the proposed model, we applied it to real data: the trend of Korea's KOSPI200 stock index. The results show that GA-MSVM is more effective than the conventional multi-class SVM, previously known to give the best prediction performance, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was found to play a very important role in predicting the stock index trend, contributing more to the model's improvement than any other factor. Our research primarily aims at predicting trend segments in order to capture signals or short-term trend transition points. The experimental data set includes technical indicators, such as price and volatility measures, of the KOSPI200 index (2004-2017) and macroeconomic data (interest rates, exchange rates, the S&P 500, etc.). Using statistical methods including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (sideways), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for verification. Comparative experiments were conducted against MDA, MLOGIT, CBR, ANN, and MSVM models; the MSVM adopted the One-Against-One (OAO) approach, known as the most accurate among the various MSVM formulations. Although some limitations remain, the final experimental results demonstrate that the proposed GA-MSVM performs at a significantly higher level than all comparative models.
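The wrapper idea behind GA-MSVM, in which one chromosome jointly encodes which features and which training instances to keep and classifier accuracy serves as the fitness, can be sketched on toy data. Here a 1-NN classifier stands in for the MSVM; the dataset and all GA settings are illustrative assumptions.

```python
# Hedged sketch of the GA-as-wrapper idea: a chromosome holds a feature mask
# plus an instance mask, and fitness is leave-one-out accuracy of a simple
# classifier (1-NN standing in for the paper's MSVM). Data are synthetic.
import random

random.seed(1)

# 2 informative features (class-separated) + 2 pure-noise features.
def make_point(label):
    return [label + random.gauss(0, 0.3), label - random.gauss(0, 0.3),
            random.gauss(0, 1), random.gauss(0, 1)], label

data = [make_point(lbl) for lbl in (0, 1) for _ in range(10)]
n_feat, n_inst = 4, len(data)

def predict(idx, mask_f, mask_i):
    """1-NN over selected instances/features, excluding the point itself."""
    x = data[idx][0]
    best, best_d = None, float("inf")
    for j, (keep, (xj, yj)) in enumerate(zip(mask_i, data)):
        if not keep or j == idx:
            continue
        d = sum((a - b) ** 2 for k, a, b in zip(mask_f, x, xj) if k)
        if d < best_d:
            best, best_d = yj, d
    return best

def fitness(chrom):
    mask_f, mask_i = chrom[:n_feat], chrom[n_feat:]
    if not any(mask_f) or not any(mask_i):
        return 0.0
    hits = sum(predict(i, mask_f, mask_i) == data[i][1]
               for i in range(n_inst))
    return hits / n_inst

def evolve(pop_size=20, gens=15):
    pop = [[random.randint(0, 1) for _ in range(n_feat + n_inst)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_feat + n_inst)  # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(len(child))] ^= 1    # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
best_acc = fitness(best)
```

Because the noise features only blur the 1-NN distances, chromosomes that switch them off (and drop unhelpful instances) score higher, which is exactly the mechanism the paper exploits at MSVM scale.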

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • As content continues to overflow, selecting high-quality information that matches users' interests and needs becomes ever more important. Search providers now try to reflect user intent in results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is generated constantly and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information keeps emerging. It faces several practical difficulties, however. First, it is hard to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of the knowledge grow and its patterns keep changing. Third, performance evaluation is difficult because of the unsupervised nature of the learning. Finally, the concept of knowledge itself is ambiguous, so defining the extraction problem is not easy. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities with a neural tensor network and evaluates the result. Unlike previous work, its purpose is to extract knowledge entities related to individual stock items.
The presented model applies various but relatively simple data-processing methods to address the problems of previous research and improve effectiveness. The study's significance is threefold. First, it offers a practical, simple, and readily applicable automatic knowledge extraction method. Second, it makes performance evaluation possible through a simple problem definition. Third, it increases the expressiveness of the extracted knowledge by generating input data on a sentence basis without complex morphological analysis. An objective performance evaluation method and empirical results are also presented. For the empirical study, analysts' reports on the 30 stocks most frequently covered between May 30, 2017 and May 21, 2018 were used. Of the 5,600 reports in total, 3,074 (about 55%) were designated as the training set and the remaining 45% as the test set. Before constructing the model, all training reports were grouped by stock and their entities were extracted with the KKMA named entity recognition tool. For each stock, the 100 most frequent entities were selected and vectorized by one-hot encoding. A neural tensor network was then used to train one score function per stock. When a new entity from the test set appears, its score under every score function is computed, and the stock whose function yields the highest score is predicted as the item related to the entity. The models are evaluated by the hit ratio over all test reports, which indicates both predictive power and whether the score functions are well constructed.
In the empirical study, the presented model achieved a 69.3% hit ratio on the test set of 2,526 reports, which is meaningfully high given the constraints of the research. Looking at per-stock performance, only three stocks - LG Electronics, KiaMtr, and Mando - performed far below average, possibly because of interference from similar items and the generation of new knowledge. This paper proposes a methodology for finding the key entities, or combinations of entities, needed to retrieve information matching a user's investment intention. Graph data is generated using only the named entity recognition tool and fed to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the model's effectiveness as described above, but some limits remain: notably, the markedly poor performance on a few stocks calls for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
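The per-stock score functions described above follow the neural tensor network form g(e) = u · tanh(eᵀWe + Ve + b): a new entity is scored by every stock's function and routed to the argmax. The sketch below uses toy dimensions and random weights rather than the trained model from the paper.

```python
# Hedged sketch of the neural tensor network (NTN) score: one (W, V, b, u)
# parameter set per stock, highest score wins. Dimensions and weights are
# toy values, not the trained model from the paper.
import math, random

random.seed(2)
d, k = 4, 2   # entity dimension, tensor slices

def rand_mat(r, c):
    return [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]

def ntn_score(e, W, V, b, u):
    """Score of entity e under one stock's (W, V, b, u) parameters."""
    hidden = []
    for s in range(k):
        bilinear = sum(e[i] * W[s][i][j] * e[j]
                       for i in range(d) for j in range(d))   # e^T W_s e
        linear = sum(V[s][i] * e[i] for i in range(d))        # (V e)_s
        hidden.append(math.tanh(bilinear + linear + b[s]))
    return sum(ui * hi for ui, hi in zip(u, hidden))          # u . tanh(...)

def make_params():
    return ([rand_mat(d, d) for _ in range(k)], rand_mat(k, d),
            [random.uniform(-0.5, 0.5) for _ in range(k)],
            [random.uniform(-0.5, 0.5) for _ in range(k)])

# One parameter set per stock; a new entity goes to the argmax stock.
stocks = {name: make_params() for name in ("A-stock", "B-stock", "C-stock")}
entity = [1.0, 0.0, -1.0, 0.5]   # stand-in for a one-hot-encoded entity
scores = {name: ntn_score(entity, *p) for name, p in stocks.items()}
predicted = max(scores, key=scores.get)
```

In the paper the parameters are learned from the training reports so that entities from a stock's documents score highest under that stock's function; here the routing logic is shown with untrained weights.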

The Effect of Structured Information on the Sleep Amount of Patients Undergoing Open Heart Surgery (계획된 간호 정보가 수면량에 미치는 영향에 관한 연구 -개심술 환자를 중심으로-)

  • 이소우
    • Journal of Korean Academy of Nursing
    • /
    • v.12 no.2
    • /
    • pp.1-26
    • /
    • 1982
  • The main purpose of this study was to test the effect of structured information on the amount of sleep obtained by patients undergoing open heart surgery. The study specifically addressed two basic research questions: (1) Would the structured information reduce sleep disturbance related to anxiety and physical stress before and after the operation? (2) What would be the effects of the structured information on the level of preoperative state anxiety, the hormonal change, and the degree of behavioral change in these patients? A quasi-experimental design with one experimental group and one control group was used to answer these questions. Subjects in both groups were matched as closely as possible to avoid effects from differences inherent in group characteristics. Baseline data were collected on both groups for 7 days prior to the experiment, and subjects in both groups were found to have comparable sleep patterns, trait anxiety, hormonal levels, and behavioral levels. The structured information, as the experimental input, was given to the experimental group only. Data on sleep amount on the consecutive pre- and postoperative days, on preoperative state anxiety, and on hormonal and behavioral changes were collected and compared between the two groups. To test the effectiveness of the structured information, two main hypotheses and three sub-hypotheses were formulated as follows. Main hypothesis 1: The experimental group receiving structured information will sleep more than the control group without structured information on the night before the open heart surgery.
Main hypothesis 2: The experimental group with structured information will sleep more than the control group during the week following the open heart surgery. Sub-hypothesis 1: The experimental group will have a lower level of state anxiety than the control group on the night before the surgery. Sub-hypothesis 2: The experimental group will have a lower hormonal level than the control group on the 5th day after the surgery. Sub-hypothesis 3: The experimental group will show a lower level of behavioral change than the control group during the week after the surgery. The research was conducted in a national university hospital in Seoul, Korea. The 53 subjects who participated in the study were divided into an experimental group and a control group by random sampling: 26 were placed in the experimental group and 27 in the control group. Instruments: (1) Structured information: the independent variable was constructed by the researcher on the basis of Roy's adaptation model, covering physiologic needs, self-concept, role function, and interdependence needs as related to sleep, together with the operative procedures. (2) Sleep amount measure: the main dependent variable was measured by trained nurses through observation against established criteria, such as closed or open eyes, regular or irregular respiration, body movement, posture, responses to light and questioning, facial expressions, and self-report after sleep. (3) State anxiety measure: state anxiety, as a sub-dependent variable, was measured with Spielberger's STAI anxiety scale. (4) Hormonal change measure: hormone level, as a sub-dependent variable, was measured as the cortisol level in plasma.
(5) Behavior change measure: behavior, as a sub-dependent variable, was measured with Wyatt's Behavior and Mood Rating Scale. The data were collected over a period of four months, from June to October 1981, after a pretest period of two months. The hypotheses were tested with t-tests on mean differences and analysis of covariance. The instrument results were as follows: (1) Cronbach's alpha reliability for the STAI trait and state anxiety measures was r = .90 and r = .91, respectively. (2) The Behavior and Mood Rating Scale was analyzed by principal component analysis; the seven factors retained were anger, anxiety, hyperactivity, depression, bizarre behavior, suspicious behavior, and emotional withdrawal, with a cumulative variance of 71.3%. The hypothesis test results were as follows: (1) Main hypothesis 1 was not supported. The experimental group slept 282 minutes compared to 255 minutes for the control group; the sleep amount was higher in the experimental group, but the difference was not statistically significant at the .05 level. (2) Main hypothesis 2 was not supported. The mean sleep amounts of the experimental and control groups were 297 and 278 minutes, respectively; the experimental group slept more, but the difference was not statistically significant at the .05 level. (3) Sub-hypothesis 1 was not supported. The mean state anxiety scores of the experimental and control groups were 42.3 and 43.9, respectively; the experimental group's anxiety was slightly lower, but the difference was not statistically significant at the .05 level. (4) Sub-hypothesis 2 was not supported.
The mean hormonal levels of the experimental and control groups were 338 ㎍ and 440 ㎍, respectively; the experimental group's level was lower, but the difference was not statistically significant at the .05 level. (5) Sub-hypothesis 3 was supported. The mean behavioral change scores of the experimental and control groups were 29.60 and 32.00, respectively; the experimental group's level was lower, and the difference was statistically significant at the .05 level. In summary, the structured information did not influence the sleep amount, state anxiety, or hormonal level of patients undergoing open heart surgery at a statistically significant level, although clear trends appeared in those relationships, and it had a significant effect on the level of behavioral change. It may further be speculated that the large individual differences in variables such as sleep amount, state anxiety, and hormonal fluctuation partly account for the statistical insensitivity of the experiment.


Case Analysis of the Promotion Methodologies in the Smart Exhibition Environment (스마트 전시 환경에서 프로모션 적용 사례 및 분석)

  • Moon, Hyun Sil;Kim, Nam Hee;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.171-183
    • /
    • 2012
  • With the development of technology, the exhibition industry has received much attention from governments and companies as an important marketing channel, and exhibitors likewise treat exhibitions as a new channel of marketing activity. However, the growing size of exhibitions, in floor space and number of visitors, naturally creates a competitive environment, so exhibitors plan and implement many promotion techniques as marketing tools. A smart exhibition environment, which lets them deliver real-time information to visitors, makes various kinds of promotion possible. But promotions that ignore visitors' varied needs and preferences lose their original purpose and function: indiscriminate promotions feel like spam and fail to achieve their aims. What is needed is an STP approach that segments visitors on the right evidence (Segmentation), selects the target visitors (Targeting), and delivers appropriate services to them (Positioning). Applying an STP strategy in the smart exhibition environment must account for its characteristics. First, an exhibition is a market event of a specific duration held at intervals, so exhibitors must plan different events and promotions for each exhibition; a traditional STP system would have to provide services from scant information about existing visitors while still guaranteeing performance. Second, for automatic segmentation, cluster analysis, a standard data mining technique, can be adopted: in the smart exhibition environment visitor information is acquired in real time, and services based on it must also be provided in real time.
However, many clustering algorithms have a scalability problem: they hardly work on large databases and require domain knowledge to determine input parameters. A suitable, well-fitted methodology must therefore be selected to provide real-time services. Finally, the data available in the smart exhibition environment should be exploited: because useful data such as booth visit records and event participation records exist, the STP strategy can rest on behavioral as well as demographic segmentation. In this study, we therefore analyze a case of a promotion methodology with which exhibitors can provide differentiated services to segmented visitors in the smart exhibition environment. First, considering the characteristics of the environment, we identify evidence for segmentation and fit a clustering methodology for real-time service. Many studies classify visitors, but we adopt a segmentation based on visitors' behavioral traits: through direct observation, Veron and Levasseur classified visitors into four groups likened to animals (butterfly, fish, grasshopper, and ant). Because the variables of their classification, such as the number of visits and the average time per visit, can be estimated in the smart exhibition environment, it provides a theoretical and practical basis for our system. Next, we construct a pilot system that automatically selects suitable visitors according to the objectives of a promotion and instantly sends them promotion messages. We deployed this system in a real exhibition environment and analyzed the resulting data.
As a result of classifying visitors into the four behavioral types, we provide insights for researchers building smart exhibition environments and derive promotion strategies fitting each cluster. First, ANT-type visitors show a high response rate to every promotion except experience promotions: they are attracted by tangible benefits on the exhibition floor and dislike promotions that require a long time. By contrast, GRASSHOPPER-type visitors respond highly only to experience promotions. Second, FISH-type visitors favor coupon and content promotions: although they do not examine booths in detail, they prefer to obtain further information such as brochures, so exhibitors who want to convey much information in limited time should pay attention to this type. These promotion strategies are expected to give exhibitors insights when they plan and organize their activities and to improve their performance.
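The segmentation step above, clustering visitors on observable behavioral traits such as the number of booth visits and the average time per visit, can be sketched with plain k-means, which here stands in for whatever clustering the pilot system used. The visitor data are synthetic points around four archetypes.

```python
# Hedged sketch of behavioural segmentation: cluster visitors on the two
# traits observable in a smart exhibition (number of booth visits, average
# minutes per visit). Plain k-means stands in for the system's method;
# the visitor data are synthetic.
import random

random.seed(3)

def visitors(n_per_type=15):
    """Synthetic visitors scattered around four behavioural archetypes."""
    archetypes = [(25, 2), (8, 12), (20, 1), (5, 15)]  # (visits, avg min)
    pts = []
    for vx, vy in archetypes:
        pts += [(vx + random.gauss(0, 1), vy + random.gauss(0, 1))
                for _ in range(n_per_type)]
    return pts

def kmeans(points, k=4, iters=30):
    centres = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:   # assign each visitor to the nearest centre
            i = min(range(k), key=lambda c: (p[0] - centres[c][0]) ** 2
                                            + (p[1] - centres[c][1]) ** 2)
            groups[i].append(p)
        for i, g in enumerate(groups):   # move centres to group means
            if g:
                centres[i] = (sum(p[0] for p in g) / len(g),
                              sum(p[1] for p in g) / len(g))
    return centres, groups

centres, groups = kmeans(visitors())
```

Once the centres stabilize, each cluster can be labelled with an archetype (ant, fish, grasshopper, butterfly) and targeted with the promotion type the case analysis found effective for it.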

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition accelerates and the complexity of change grows, a variety of research has sought to improve firms' short-term performance and enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. Discovering a promising technology depends on how a firm evaluates the value of technologies, and many evaluation methods have been proposed. Approaches based on experts' opinions are widely accepted for predicting the value of technologies; they provide in-depth analysis and valid results, but they are usually cost- and time-ineffective and limited to qualitative evaluation. Considerable research has attempted to forecast technology value from patent information to overcome this limitation. Patent-based evaluation is a valuable approach to technological forecasting because a patent contains a full, practical description of a technology in a uniform structure and provides information not divulged in any other source. Although the patent-based approach has advanced our understanding of the prediction of promising technologies, it has its own limitations: predictions are made from past patent information, and interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, a technology's promise is evaluated from three different and complementary dimensions: impact, fusion, and diffusion.
The impact of a technology refers to its influence on the development and improvement of future technologies and is closely associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, representing the breadth of search underlying it; since fusion can be calculated per technology or per patent, this study measures two fusion indexes, one per technology and one per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented with an artificial intelligence method. The study uses the values of the five indexes (impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (t-n, t-n-1, t-n-2, ...) as input variables; the output variables are the five index values at time t, which are used for learning with the back-propagation algorithm. In the third module, final promising technologies are recommended with the analytic hierarchy process (AHP), which provides the relative importance of each index and hence a final promising index per technology. The applicability of the proposed methodology is tested on U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the remaining indexes.
These unexpected results may be explained, in part, by the small number of patents: because only patent data in class G06F is used, the sample is relatively small, leading to incomplete learning of the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technologies. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial intelligence network. It helps managers who plan technology development and policy makers who implement technology policy by providing a quantitative prediction methodology, and it offers other researchers a deeper understanding of the complex field of technological forecasting.
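The third module's AHP step can be sketched directly: a pairwise-comparison matrix over the five indexes yields relative weights as its principal eigenvector, approximated here by power iteration. The judgement matrix below is an illustrative example, not the one elicited in the study.

```python
# Hedged sketch of the AHP step: pairwise comparisons over the five indexes
# are turned into relative weights via the principal eigenvector (power
# iteration). The comparison values are an illustrative judgement matrix.

indexes = ["impact", "fusion/tech", "fusion/patent",
           "diffusion/tech", "diffusion/patent"]

# A[i][j] = how much more important index i is than index j (Saaty scale);
# reciprocal by construction: A[j][i] = 1 / A[i][j].
A = [[1,   3,   3,   5,   5],
     [1/3, 1,   1,   3,   3],
     [1/3, 1,   1,   3,   3],
     [1/5, 1/3, 1/3, 1,   1],
     [1/5, 1/3, 1/3, 1,   1]]

def ahp_weights(a, iters=50):
    n = len(a)
    w = [1.0 / n] * n
    for _ in range(iters):                        # power iteration
        w = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]                    # renormalize each step
    return w

weights = ahp_weights(A)
ranking = sorted(zip(indexes, weights), key=lambda t: -t[1])
# A technology's final "promising index" is then the weighted sum of its
# five index values under these weights.
```

With this toy matrix the impact index receives the largest weight, mirroring the paper's finding that impact (with fusion per technology) is an important criterion.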