

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • The data center is a physical facility that accommodates computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in one element of the facility, it may affect not only that equipment but also other connected equipment, and it may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, which makes the cause difficult to identify. Previous studies on failure prediction in data centers treated each server as a single, independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring within servers are difficult to determine, and adequate prevention has not yet been achieved, mainly because server failures do not occur in isolation: a failure in one server can trigger failures in other servers, or be triggered by them. In other words, while existing studies analyzed failures on the assumption that servers do not affect one another, this study assumes that failures propagate between servers.
To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment and another failure occurs within 5 minutes of that time, the two failures are defined as simultaneous. After constructing sequences of the devices that failed at the same time, the 5 devices that most frequently failed together within those sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network model structure was used to reflect the fact that each server contributes to a complex failure to a different degree; this algorithm improves prediction accuracy by assigning larger weights to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were modeled both as a single-server state and as a multi-server state, and the results were compared. The second experiment improved the prediction accuracy for complex failures by optimizing a separate threshold for each server.
In the first experiment, which modeled the data as a single server and as multiple servers respectively, the single-server model predicted that three of the five servers had no failure even though failures actually occurred, whereas the multi-server model correctly predicted that all five servers had failed. This result supports the hypothesis that servers affect one another: prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results of this study are expected to help prevent failures in advance.
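The 5-minute simultaneity rule described above can be sketched as a single grouping pass over a chronologically sorted failure log. This is only an illustration of the definition; the device names, timestamps, and the `group_simultaneous` helper are made up, not the paper's code or data.

```python
from datetime import datetime, timedelta

# Hypothetical failure log sorted by time: (device_id, failure_time).
failures = [
    ("server-A", datetime(2020, 1, 1, 10, 0)),
    ("server-B", datetime(2020, 1, 1, 10, 3)),
    ("server-C", datetime(2020, 1, 1, 10, 20)),
    ("server-A", datetime(2020, 1, 1, 10, 22)),
]

WINDOW = timedelta(minutes=5)  # the paper's 5-minute simultaneity window

def group_simultaneous(events):
    """Group failures whose timestamps fall within WINDOW of the first
    failure in the current group (the 'simultaneous' definition above)."""
    groups, current = [], []
    for device, t in events:
        if current and t - current[0][1] > WINDOW:
            groups.append(current)
            current = []
        current.append((device, t))
    if current:
        groups.append(current)
    return groups

groups = group_simultaneous(failures)
print([[d for d, _ in g] for g in groups])
# → [['server-A', 'server-B'], ['server-C', 'server-A']]
```

The grouped sequences would then feed the co-occurrence selection and LSTM modeling steps.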

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.59-77 / 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the predictive models from the perspective of two different analyses. The first is the analysis period: we divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: in order to predict when firms will increase capital by issuing new stocks, the prediction horizon is categorized as one year, two years, and three years ahead. In total, therefore, six prediction models are developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is a widely used prediction method that builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. Well-known decision tree induction algorithms include CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed, which yields better performance than the others. We obtained data for rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, including 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices.
For model building and testing, we used 10,925 financial analysis records of 658 listed firms in total. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. In total, 84 of the financial analysis variables were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies on data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those on data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions behind rights issues have become more evident. The experimental results also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different types of industries show different patterns of rights issues. We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a broader range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained by different data mining techniques such as neural networks, logistic regression, and SVM.
Second, we need to develop and evaluate new prediction models that include variables which research on capital structure theory has identified as relevant to rights issues.
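The modeling setup can be sketched roughly as follows. C5.0 is a commercial algorithm (the paper uses the C5.0 node in PASW Modeler 13); here scikit-learn's CART-based `DecisionTreeClassifier` stands in for it, and the 84-variable input is replaced by synthetic data, so this is an analogy rather than a reproduction.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for the 84 financial-analysis input variables; the
# label plays the role of the rights-issue status (1 = issued, 0 = not).
X = rng.normal(size=(1000, 84))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# 60% of the data for model building, 40% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(X_train, y_train)
print(f"test accuracy: {tree.score(X_test, y_test):.3f}")
```

Like C5.0, a fitted CART tree exposes its split rules, which is what gives decision trees the explanation capability the paper emphasizes.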

Relationships Among Employees' IT Personnel Competency, Personal Work Satisfaction, and Personal Work Performance: A Goal Orientation Perspective (조직구성원의 정보기술 인적역량과 개인 업무만족 및 업무성과 간의 관계: 목표지향성 관점)

  • Heo, Myung-Sook;Cheon, Myun-Joong
    • Asia pacific journal of information systems / v.21 no.4 / pp.63-104 / 2011
  • This study examines the relationships among employees' goal orientation, IT personnel competency, and personal effectiveness. Goal orientation includes learning goal orientation, performance-approach goal orientation, and performance-avoid goal orientation. Personal effectiveness consists of personal work satisfaction and personal work performance. In general, IT personnel competency refers to the skills, expertise, and knowledge that IT experts require to perform IT activities in organizations. However, with the advent of the internet and the generalization of IT, IT personnel competency has become an important competency not only for technological experts but also for ordinary employees. While the competency of IT itself is important, an appropriate harmony between IT personnel's business capability and technological capability enhances the value of human resources and thus provides organizations with sustainable competitive advantages. The rapid pace of organizational change places increased pressure on employees to continually update their skills and adapt their behavior to new organizational realities. This challenge raises a number of important questions concerning organizational behavior: Why do some employees display remarkable flexibility in their behavioral responses to changes in the organization, whereas others firmly resist change or experience great stress when faced with the need to alter behavior? Why do some employees continually strive to improve themselves over their life span, whereas others are content to forge through life using the same basic knowledge and skills? Why do some employees throw themselves enthusiastically into challenging tasks, whereas others avoid them? The concept of goal orientation proposed by organizational psychology provides at least a partial answer to these questions.
Goal orientations are stable personal characteristics fostered by "self-theories" about the nature and development of the attributes (such as intelligence, personality, abilities, and skills) people hold. Self-theories are one's beliefs, and goal orientations are the achievement motivation revealed in pursuing goals in accordance with those beliefs. The goal orientations include learning goal orientation, performance-approach goal orientation, and performance-avoid goal orientation. Specifically, a learning goal orientation refers to a preference to develop the self by acquiring new skills, mastering new situations, and improving one's competence. A performance-approach goal orientation refers to a preference to demonstrate and validate the adequacy of one's competence by seeking favorable judgments and avoiding negative judgments. A performance-avoid goal orientation refers to a preference to avoid disproof of one's competence and negative judgments about it, while focusing on performance. The study also examines the moderating role of employees' work career to investigate differences in the relationship between IT personnel competency and personal effectiveness. The study analyzes the collected data using PASW 18.0 and PLS (Partial Least Squares), and uses the PLS bootstrapping algorithm (sample size: 500) to test the research hypotheses. The results show that the influences of both a learning goal orientation (${\beta}$ = 0.301, t = 3.822, p < 0.001) and a performance-approach goal orientation (${\beta}$ = 0.224, t = 2.710, p < 0.01) on IT personnel competency are significantly positive, while the influence of a performance-avoid goal orientation (${\beta}$ = -0.142, t = 2.398, p < 0.05) on IT personnel competency is significantly negative. These results indicate that employees differ in their psychological and behavioral responses according to their goal orientation.
The results also show that the impact of IT personnel competency on both personal work satisfaction (${\beta}$ = 0.395, t = 4.897, p < 0.001) and personal work performance (${\beta}$ = 0.575, t = 12.800, p < 0.001) is significantly positive, and the impact of personal work satisfaction (${\beta}$ = 0.148, t = 2.432, p < 0.05) on personal work performance is significantly positive as well. Finally, the impacts of the control variables (gender, age, type of industry, position, work career) on the relationships between IT personnel competency and personal effectiveness (personal work satisfaction and work performance) are partly significant. In addition, the study uses the PLS algorithm to compute the GoF (global criterion of goodness of fit) of the exploratory research model, which includes IT personnel competency as a mediating variable. The analysis shows that the GoF value of 0.45 exceeds GoF-large (0.36); therefore, the research model turns out to be good. The study also performs a Sobel test to assess the statistical significance of the mediating variable, IT personnel competency, which the PLS analysis had already shown to have a mediating effect. The Sobel test shows that the Z values are all statistically significant (above 1.96 or below -1.96), indicating that IT personnel competency plays a mediating role in the research model. At present, most employees are afraid of, and resistant to, organizational change in organizations where the acceptance and learning of a new information technology or information system is required. The problem is due to an increasing feeling of uneasiness and uncertainty about revising past practices in accordance with new organizational changes, and it is not always possible even for employees with positive attitudes to perform their work in line with organizational goals.
Therefore, organizations need to identify what kinds of goal-oriented mindsets employees have, motivate them toward self-directed learning, and provide an organizational environment that enhances the positive aspects of their work. The study thus provides researchers and practitioners with a matter of primary interest in goal orientation and IT personnel competency, of which they have been largely unaware until very recently. Academic and practical implications, limitations arising in the course of the research, and suggestions for future research directions are also discussed.
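The Sobel test mentioned above has a closed form: for an indirect effect through a mediator, $z = ab / \sqrt{b^2\,se_a^2 + a^2\,se_b^2}$. The sketch below plugs in the paper's reported β values, with standard errors back-calculated as β/t from the reported statistics; treating those as the exact path standard errors is an assumption made for illustration.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel statistic for the indirect effect a*b (predictor -> mediator
    path a, mediator -> outcome path b)."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Path a: learning goal orientation -> IT personnel competency
# (beta = 0.301, t = 3.822, so se is approximated as beta/t).
# Path b: IT personnel competency -> personal work performance
# (beta = 0.575, t = 12.800).
z = sobel_z(a=0.301, se_a=0.301 / 3.822, b=0.575, se_b=0.575 / 12.800)
print(f"Sobel z = {z:.2f}")  # |z| > 1.96 indicates significant mediation
```

With these inputs z comes out well above 1.96, consistent with the paper's conclusion that IT personnel competency mediates the relationship.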

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a system that produces the optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms deliver asset allocation recommendations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns estimated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor holds no views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect?
Surveys of the performance of stocks recommended by securities analysts show very poor results; therefore, incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor view model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; the linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent view model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent view matrices, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios are used as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018. Our suggested intelligent view model, combined with implied equilibrium returns, produced the optimal Black-Litterman portfolio. Out of sample, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolios. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value.
Its maximum drawdown of -20.8% and Sharpe ratio of 0.17, which measures return per unit of risk, were also the best among the portfolios compared. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
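The Black-Litterman combination step described above can be sketched in a toy form. The posterior expected returns are $\mu = [(\tau\Sigma)^{-1} + P^{\top}\Omega^{-1}P]^{-1}[(\tau\Sigma)^{-1}\pi + P^{\top}\Omega^{-1}Q]$. In the paper, P and Q come from the SVM view model; every number below (covariances, weights, the single view) is invented for illustration.

```python
import numpy as np

# Toy 3-asset Black-Litterman posterior.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])      # asset covariance matrix
w_mkt = np.array([0.5, 0.3, 0.2])           # market-cap weights
delta, tau = 2.5, 0.05                      # risk aversion, scaling factor

# Implied equilibrium returns via reverse optimization.
pi = delta * Sigma @ w_mkt

P = np.array([[1.0, -1.0, 0.0]])            # one view: asset 1 beats asset 2
Q = np.array([0.02])                        # ... by 2%
Omega = tau * P @ Sigma @ P.T               # view uncertainty (1x1)

inv = np.linalg.inv
mu_bl = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P) @ (
    inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ Q)
print(np.round(mu_bl, 4))                   # posterior expected returns
```

The posterior shifts asset 1 up and asset 2 down relative to the equilibrium returns, reflecting the view; feeding `mu_bl` into a standard mean-variance optimizer then yields the Black-Litterman portfolio.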

A study on evaluation of the image with washed-out artifact after applying scatter limitation correction algorithm in PET/CT exam (PET/CT 검사에서 냉소 인공물 발생 시 산란 제한 보정 알고리즘 적용에 따른 영상 평가)

  • Ko, Hyun-Soo;Ryu, Jae-kwang
    • The Korean Journal of Nuclear Medicine Technology / v.22 no.1 / pp.55-66 / 2018
  • Purpose: In PET/CT exams, washed-out artifacts can occur due to severe patient motion or high specific activity, degrading both qualitative reading and quantitative analysis. Scatter limitation correction, by GE, is an algorithm that corrects washed-out artifacts and recovers the images in PET scans. The purpose of this study is to measure, in a phantom experiment, the threshold of specific activity below which an image showing a washed-out artifact can be recovered to its original uptake values, and to compare quantitative analyses of clinical patient data before and after correction. Materials and Methods: PET and CT images were acquired with no misalignment (D0) and with 1, 2, 3, and 4 cm of misalignment (D1, D2, D3, D4), at 20 levels of specific activity from 20 to 20,000 kBq/ml, on a $^{68}Ge$ cylinder phantom. We also measured, for 34 patients who underwent $^{18}F-FDG$ Fusion Whole Body PET/CT exams, the misalignment distance of the Foley catheter line between the CT and PET images, the specific activity that produced the washed-out artifact, the $SUV_{mean}$ of muscle in the artifact slice, the $SUV_{max}$ of lesions in the artifact slice, and the $SUV_{max}$ of other lesions outside the artifact slice, before and after correction. SPSS 21 was used to analyze the difference in SUV before and after scatter limitation correction with a paired t-test. Results: In the phantom experiment, the $SUV_{mean}$ of the $^{68}Ge$ cylinder decreased as the specific activity of $^{18}F$ increased, and decreased further as the misalignment distance between CT and PET increased; conversely, the effect of the correction increased with the misalignment distance. No washed-out artifact appeared below 50 kBq/ml, where $SUV_{mean}$ equaled the original value. At D0 and D1, $SUV_{mean}$ recovered to the original value (0.95) below 120 kBq/ml when scatter limitation correction was applied.
At D2 and D3, $SUV_{mean}$ recovered to the original value below 100 kBq/ml, and at D4 below 80 kBq/ml. In the 34 clinical patients, the average misalignment distance was 2.02 cm and the average specific activity producing the washed-out artifact was 490.15 kBq/ml. The average $SUV_{mean}$ of muscles and the average $SUV_{max}$ of lesions in the artifact slice before and after correction showed significant differences in the paired t-tests, (t=-13.805, p=0.000) and (t=-2.851, p=0.012) respectively, but the average $SUV_{max}$ of lesions outside the artifact slice showed no significant difference (t=-1.173, p=0.250). Conclusion: The scatter limitation correction algorithm of the GE PET/CT scanner helps correct washed-out artifacts caused by patient motion or high specific activity and recover the PET images. When reading an image affected by a washed-out artifact, by measuring the misalignment distance between the CT and PET images and the specific activity, and then applying the scatter limitation algorithm, we can analyze the images more accurately without repeating the scan.
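A paired t-test like the one used for the before/after SUV comparison can be reproduced with SciPy; the SUV values below are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical muscle SUVmean in the artifact slice for six patients,
# before and after scatter limitation correction (not the study's data).
before = np.array([0.42, 0.38, 0.51, 0.45, 0.40, 0.47])
after = np.array([0.61, 0.55, 0.70, 0.66, 0.58, 0.69])

t, p = stats.ttest_rel(before, after)
print(f"t = {t:.3f}, p = {p:.5f}")  # negative t: SUV recovers after correction
```

A negative t with small p, as in the study's muscle and in-slice lesion comparisons, indicates that the correction systematically raises the SUV back toward its original value.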

Individual Thinking Style leads its Emotional Perception: Development of Web-style Design Evaluation Model and Recommendation Algorithm Depending on Consumer Regulatory Focus (사고가 시각을 바꾼다: 조절 초점에 따른 소비자 감성 기반 웹 스타일 평가 모형 및 추천 알고리즘 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.171-196 / 2018
  • With the development of the web, two-way communication and evaluation became possible and marketing paradigms shifted. In order to meet the needs of consumers, web design trends are continuously responding to consumer feedback. As the web becomes more and more important, both academics and businesses are studying consumer emotions and satisfaction on the web. However, some consumer characteristics are not well considered. Demographic characteristics such as age and sex have been studied extensively, but few studies consider psychological characteristics such as regulatory focus (i.e., emotional regulation). In this study, we analyze the effect of web style on consumer emotion. Many studies analyze the relationship between the web and regulatory focus, but most concentrate on the purpose of web use, particularly motivation and information search, rather than on web style and design. The web communicates with users through visual elements. Because the human brain is influenced by all five senses, both design factors and emotional responses are important in the web environment. Therefore, in this study, we examine the relationship between consumer emotion and satisfaction and web style and design. Previous studies have considered the effects of web layout, structure, and color on emotions. In this study, however, we excluded these web components, in contrast to earlier studies, and analyzed the relationship between consumer satisfaction and emotional indexes of web-style only. To perform this analysis, we collected consumer surveys presenting 40 web style themes to 204 consumers. Each consumer evaluated four themes. The emotional adjectives evaluated by consumers were composed of 18 contrast pairs, and the upper emotional indexes were extracted through factor analysis. The emotional indexes were 'softness,' 'modernity,' 'clearness,' and 'jam.' 
Hypotheses were established based on the assumption that the emotional indexes have different effects on consumer satisfaction. After the analysis, hypotheses 1, 2, and 3 were accepted and hypothesis 4 was rejected; although rejected, its effect on consumer satisfaction was negative rather than positive. This means that the emotional indexes 'softness,' 'modernity,' and 'clearness' have a positive effect on consumer satisfaction: consumers prefer designs that feel soft, emotional, natural, rounded, dynamic, modern, elaborate, unique, bright, pure, and clear. 'Jam' has a negative effect on consumer satisfaction, meaning consumers prefer designs that feel empty, plain, and simple. Regulatory focus produces differences in motivation and propensity across various domains. It is important to consider organizational behavior and decision making according to regulatory focus tendency, and it affects not only political, cultural, and ethical judgments and behavior but also broader psychological processes. Regulatory focus also differs in emotional response: promotion focus responds more strongly to positive emotions, while prevention focus responds strongly to negative emotions. Web style is a type of service, and consumer satisfaction is affected not only by cognitive evaluation but also by emotion; this emotional response depends on whether consumers perceive benefit or harm to themselves. Therefore, it is necessary to confirm differences in consumers' emotional responses to web style according to regulatory focus, one of the consumers' characteristics and viewpoints. The MMR analysis accepted hypothesis 5.3 and rejected hypothesis 5.4, although hypothesis 5.4 was supported in the direction opposite to that hypothesized. Through this validation, we confirmed the mechanism of emotional response according to regulatory focus tendency.
Using these results, we developed the structure of a web-style recommendation system and recommendation methods based on regulatory focus. We classified consumers into three regulatory-focus groups, promotion, grey, and prevention, and suggest a web-style recommendation method for each group. If this study is developed further, we expect that existing regulatory focus theory can be extended beyond motivation to the emotional and behavioral responses that follow from regulatory focus tendency. Moreover, we believe it is possible to recommend the web styles that consumers most prefer according to their regulatory focus and emotional desires.
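The moderated multiple regression (MMR) used to test hypotheses 5.x can be sketched as an OLS fit with an interaction term between an emotional index and a regulatory-focus indicator. All data here are synthetic, and the variable names ('jam', prevention) are chosen only to echo the constructs in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Synthetic stand-ins: a 'jam' emotional index, a prevention-focus
# indicator, and a satisfaction score driven by their interaction.
jam = rng.normal(size=n)
prevention = rng.integers(0, 2, size=n).astype(float)
satisfaction = (3.0 - 0.2 * jam - 0.4 * jam * prevention
                + rng.normal(scale=0.5, size=n))

# MMR design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), jam, prevention, jam * prevention])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print(np.round(beta, 2))  # beta[3] estimates the moderation effect
```

A non-zero interaction coefficient (beta[3]) is what MMR interprets as moderation: the slope of the emotional index on satisfaction differs between regulatory-focus groups.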

GPR Development for Landmine Detection (지뢰탐지를 위한 GPR 시스템의 개발)

  • Sato, Motoyuki;Fujiwara, Jun;Feng, Xuan;Zhou, Zheng-Shu;Kobayashi, Takao
    • Geophysics and Geophysical Exploration / v.8 no.4 / pp.270-279 / 2005
  • Under a research project supported by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), we have conducted the development of GPR systems for landmine detection. By 2005, we had finished developing two prototype GPR systems, namely ALIS (Advanced Landmine Imaging System) and SAR-GPR (Synthetic Aperture Radar-Ground Penetrating Radar). ALIS is a novel landmine detection sensor system that combines a metal detector with GPR. It is hand-held equipment with a sensor position tracking system and can visualize the sensor output in real time. To achieve sensor tracking, ALIS needs only one CCD camera attached to the sensor handle. The CCD image is superimposed with the GPR and metal detector signals, making the detection and identification of buried targets quite easy and reliable. A field evaluation test of ALIS was conducted in December 2004 in Afghanistan, where we demonstrated that it can detect buried antipersonnel landmines and can discriminate metal fragments from landmines. SAR-GPR (Synthetic Aperture Radar-Ground Penetrating Radar) is a machine-mounted sensor system composed of a GPR and a metal detector. The GPR employs an array antenna that enables advanced signal processing for better subsurface imaging. SAR-GPR, combined with a synthetic aperture radar algorithm, can suppress clutter and image buried objects in strongly inhomogeneous material. SAR-GPR is a stepped-frequency radar system whose RF components are newly developed compact vector network analyzers. The system measures 30 cm x 30 cm x 30 cm and is composed of six Vivaldi antennas and three vector network analyzers. It weighs 17 kg and can be mounted on a robotic arm on a small unmanned vehicle. The field test of this system was carried out in March 2005 in Japan.

L-band SAR-derived Sea Surface Wind Retrieval off the East Coast of Korea and Error Characteristics (L밴드 인공위성 SAR를 이용한 동해 연안 해상풍 산출 및 오차 특성)

  • Kim, Tae-Sung;Park, Kyung-Ae;Choi, Won-Moon;Hong, Sungwook;Choi, Byoung-Cheol;Shin, Inchul;Kim, Kyung-Ryul
    • Korean Journal of Remote Sensing / v.28 no.5 / pp.477-487 / 2012
  • Sea surface winds off the east coast of Korea were derived from L-band ALOS (Advanced Land Observing Satellite) PALSAR (Phased Array type L-band Synthetic Aperture Radar) data, and the characteristics of their errors were analyzed. We could retrieve high-resolution wind vectors off the east coast of Korea, including the coastal region, which has been substantially unavailable from satellite scatterometers. The retrieved SAR wind speeds showed good agreement with in-situ buoy measurements, with a relatively small root-mean-square (RMS) error of 0.67 m/s. Comparisons of the wind vectors from SAR and scatterometer gave RMS errors of 2.16 m/s and $19.24^{\circ}$ for the L-band GMF (Geophysical Model Function) algorithm 2009 and 3.62 m/s and $28.02^{\circ}$ for the 2007 version, somewhat higher than the expected limits of satellite scatterometer wind errors. The L-band SAR-derived wind field exhibited a characteristic dependence on wind direction and incidence angle. The previous version (L-band GMF 2007) revealed large errors at small incidence angles of less than $21^{\circ}$. By contrast, the L-band GMF 2009, which improved the treatment of incidence angle in the model function by using a quadratic function instead of a linear relationship, greatly reduced the wind speed error from 6.80 m/s to 1.14 m/s at small incidence angles. This study showed that the causes of wind retrieval errors, especially the effects of wind direction and incidence angle as well as other potential error sources, should be studied intensively for diverse applications of L-band SAR-derived winds.

A Development of Traffic Queue Length Measuring Algorithm Using ILD(Inductive Loop Detector) Based on COSMOS (실시간 신호제어시스템의 대기길이 추정 알고리즘 개발)

  • Seong, Ki-Ju;Lee, Choul-Ki;Jeong, Jun-Ha;Lee, Young-In;Park, Dae-Hyun
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.3 no.1 s.4
    • /
    • pp.85-96
    • /
    • 2004
  • The study begins from a basic premise: the occupancy time of a vehicle detector is directly proportional to vehicle delay; that is, a vehicle's delay can be inferred from its detector occupancy time. The resulting method was far superior in estimating queue length, and a notable advantage is that the operator does not need to optimize the parameters s1, s2, and Thdoc. Thdoc (the critical congestion degree) was lowered from 0.7 to 0.2 - 0.3. However, the method still has a problem: vehicles that experience delay without occupying the detector are not counted. In conclusion, it is necessary either to lengthen the queue detector or to install paired queue detectors. Follow-up research related to this study is also needed, since traffic signal control under congestion is required.

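The core idea above, that occupancy beyond a critical congestion threshold indicates a standing queue, can be sketched as follows. The scaling rule, parameter names, and default values here are illustrative assumptions, not the paper's calibrated algorithm; only the lowered Thdoc range (0.2-0.3 instead of 0.7) comes from the abstract:

```python
def estimate_queue_vehicles(occupancy_ratio, detector_to_stopline_m,
                            avg_vehicle_space_m=7.0, thdoc=0.25):
    """Toy queue-length estimator in the spirit of the paper's premise
    that detector occupancy is proportional to vehicle delay. thdoc is
    the critical congestion degree, lowered here from the conventional
    0.7 to the 0.2-0.3 range the study recommends."""
    if occupancy_ratio < thdoc:
        return 0  # below the congestion threshold: no standing queue assumed
    # scale the queue with how far occupancy exceeds the threshold
    congestion = (occupancy_ratio - thdoc) / (1.0 - thdoc)
    return round(congestion * detector_to_stopline_m / avg_vehicle_space_m)
```

Note that this sketch shares the limitation the abstract points out: vehicles delayed upstream of the detector never register occupancy and are invisible to the estimate.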

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.39-70
    • /
    • 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In existing data-based survey methods, the sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of frequently used words in the collected text documents. To research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive or negative meaning toward the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want to see a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want more useful information on other aspects such as 'car quality', 'car performance', and 'car service.' Such information will enable customers to make good choices when they attempt to purchase brand-new vehicles. 
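The four-step lexicon-based pipeline described above can be sketched as follows. The mini lexicon and the whitespace tokenization are stand-ins; a real system would use an expert-built domain lexicon and proper stemming and stop-word removal:

```python
# Hypothetical mini sentiment lexicon; in practice this is built per
# domain by experts, as the existing approach requires.
LEXICON = {"good": 1, "great": 1, "comfortable": 1,
           "bad": -1, "noisy": -1, "expensive": -1}

def polarity(sentence):
    """Classify a sentence by summing lexicon polarities of its words (step 3)."""
    score = sum(LEXICON.get(w, 0) for w in sentence.lower().split())
    return "pos" if score > 0 else "neg" if score < 0 else "neu"

def reputation_ratios(sentences):
    """Positive/negative ratios over all sentences (step 4)."""
    labels = [polarity(s) for s in sentences]
    n = len(labels)
    return {"pos": labels.count("pos") / n, "neg": labels.count("neg") / n}
```

This is exactly the aspect-blind summary the article criticizes: the ratios are computed over the whole corpus, with no breakdown by 'car design', 'car quality', and so on.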
In addition, automobile makers will be able to figure out the preferences and positive/negative points for new models on the market, and in the near future the weak points of those models can be improved through sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings that limit its application to real systems: (1) The main aspects (e.g., car design, quality, performance, and service) of a product (e.g., Hyundai Sonata) are not considered. With sentiment analysis that ignores aspects, the summary note reported to customers and car makers contains only the positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores over the entire corpus; this is not enough, and the main aspects of the target product need to be considered in the sentiment analysis. (2) Since the same word can have different meanings across domains, a sentiment lexicon proper to each domain needs to be constructed, and an efficient way to construct such a lexicon is required because lexicon construction is labor-intensive and time-consuming. To address these problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics enables experts to construct the sentiment lexicon easily and quickly. 
Furthermore, by reinforcing topic semantics, we can improve the accuracy of the product reputation mining algorithm well beyond that of the existing approach. In the experiments, we collected a large set of review documents on domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; presented top-k positive and negative summaries per aspect; and conducted a statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing one.
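Steps (2)-(4) of the proposed aspect-level digest can be illustrated with a minimal sketch in which fixed topic-word sets stand in for the topics a model such as LDA would learn from the review corpus. `ASPECT_TOPICS`, `LEXICON`, and every word in them are hypothetical:

```python
# Assumed topic-word sets standing in for topics discovered from the
# corpus; in the proposed method these are mined, not hand-written.
ASPECT_TOPICS = {
    "design":      {"design", "style", "look"},
    "performance": {"engine", "power", "fuel"},
}
LEXICON = {"great": 1, "sleek": 1, "weak": -1, "noisy": -1}

def aspect_digest(sentences, k=1):
    """Per-aspect positive/negative ratios plus top-k scored sentences."""
    digest = {}
    for aspect, topic_words in ASPECT_TOPICS.items():
        scored = []
        for s in sentences:
            tokens = set(s.lower().split())
            if tokens & topic_words:  # sentence mentions this aspect
                score = sum(LEXICON.get(w, 0) for w in tokens)
                scored.append((score, s))
        n = len(scored) or 1
        digest[aspect] = {
            "pos_ratio": sum(sc > 0 for sc, _ in scored) / n,
            "neg_ratio": sum(sc < 0 for sc, _ in scored) / n,
            "top_pos": [s for _, s in sorted(scored, reverse=True)[:k]],
            "top_neg": [s for _, s in sorted(scored)[:k]],
        }
    return digest
```

The per-aspect grouping is what distinguishes this digest from the corpus-wide summary of the existing approach: each aspect gets its own ratios and its own top-k evidence sentences.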