• Title/Abstract/Keywords: Mining method

Search results: 2,069 items

Empirical correlation for in-situ deformation modulus of sedimentary rock slope mass and support system recommendation using the Qslope method

  • Yimin Mao;Mohammad Azarafza;Masoud Hajialilue Bonab;Marc Bascompta;Yaser A. Nanehkaran
    • Geomechanics and Engineering
    • /
    • Vol.35 No.5
    • /
    • pp.539-554
    • /
    • 2023
  • This article establishes empirical relationships for estimating the in-situ deformation moduli (Em and Gm) of sedimentary rock slope masses from Qslope values. To this end, a comprehensive methodology was employed, encompassing a field survey, sample collection, and laboratory testing. The study sources a total of 26 specimens from five distinct locations within the South Pars (Assalouyeh) region, ensuring a representative dataset for robust correlations. The analysis reveals empirical connections between Em, the geomechanical characteristics of the rock mass, and the calculated Qslope values, expressed as follows: Em = 2.859·Qslope + 4.628 (R² = 0.554) and Gm = 1.856·Qslope + 3.008 (R² = 0.524). The study also examines the interplay between the in-situ deformation moduli and the widely used Rock Mass Rating (RMR), leading to the predictive equations RMR = 18.12·Em^0.460 (R² = 0.798) and RMR = 22.09·Gm^0.460 (R² = 0.766). Beyond these correlations, the study relates RMR and Rock Quality Designation (RQD) to Qslope values: RMR = 34.05·e^(0.33·Qslope) (R² = 0.712) and RQD = 31.42·e^(0.549·Qslope) (R² = 0.902). Leveraging these insights, the study offers an empirically derived support system tailored to the distinct characteristics of discontinuous rock slopes, grounded firmly in the Qslope methodology. This holistic approach advances the understanding of sedimentary rock slope stability and provides valuable tools for informed engineering decisions.
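
A minimal Python sketch of how these fitted correlations could be applied is given below; the coefficients are copied straight from the abstract, while the function shape, the example Qslope value, and the assumption that the moduli are in GPa are ours.

```python
# Sketch of the reported empirical correlations; coefficients come from the
# abstract above. Units are assumed to be GPa for Em/Gm (not stated here).
import math

def estimate_from_qslope(q_slope: float) -> dict:
    """Apply the paper's fitted correlations to a Qslope value."""
    em = 2.859 * q_slope + 4.628                # Em  (R^2 = 0.554)
    gm = 1.856 * q_slope + 3.008                # Gm  (R^2 = 0.524)
    rmr_em = 18.12 * em ** 0.460                # RMR from Em (R^2 = 0.798)
    rmr_gm = 22.09 * gm ** 0.460                # RMR from Gm (R^2 = 0.766)
    rmr_q = 34.05 * math.exp(0.33 * q_slope)    # RMR from Qslope (R^2 = 0.712)
    rqd = 31.42 * math.exp(0.549 * q_slope)     # RQD from Qslope (R^2 = 0.902)
    return {"Em": em, "Gm": gm, "RMR_from_Em": rmr_em,
            "RMR_from_Gm": rmr_gm, "RMR_from_Qslope": rmr_q, "RQD": rqd}

print(estimate_from_qslope(1.5))  # hypothetical Qslope value
```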

Recent Domestic Research Trend Over Startups: Focusing on the Social Network Analysis of Research Variables (스타트업 관련 최근 국내 연구 동향: 연구 변수들에 대한 소셜 네트워크 분석을 중심으로)

  • Kil, ChangMin;Yang, DongWoo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • Vol.17 No.2
    • /
    • pp.81-97
    • /
    • 2022
  • This paper aims to identify recent research trends by analyzing the variables used in startup-related papers, defined here as registered papers published from 2013 to 2020 whose titles include 'startups'. The analysis methods are text mining of all variables and text-network analysis of the affected variables; Gephi is the visualization tool for the network analysis. The results of the variable analysis are as follows. First, the independent variables consist mainly of variables concerning startups' internal factors and outside environment, but owing to startups' characteristics as early-stage, innovative companies, most variables concern internal competitiveness, marketing 4P strategy, entrepreneurship, cooperation method, transformational leadership, enterprise features, lean startup strategy, internal communication, value orientation, task conflict, relationship conflict, knowledge sharing, etc. Second, the dependent variables are mainly about outcomes and are classified into financial and non-financial performance. In other words, startup-related papers show strong interest in non-financial performance, such as management performance, team performance, and SCM performance, as well as financial performance such as sales, owing to startups' immaturity in achieving good financial results. Through this study we find that, although there are not many officially registered papers dealing with startups, those papers cover various themes, including trendy ones such as lean startup strategy, crowdfunding, influencers, and accelerators.
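
The abstract does not detail how the variable network is constructed, but a hedged sketch of the kind of co-occurrence network typically behind such Gephi visualizations follows; the library choice (networkx), the toy variable lists, and the export filename are our assumptions, not the study's.

```python
# Hypothetical sketch of a variable co-occurrence network built from the
# sets of variables each paper uses (toy data, not the study's corpus).
from itertools import combinations
import networkx as nx

papers = [
    ["entrepreneurship", "financial performance"],
    ["entrepreneurship", "knowledge sharing", "team performance"],
    ["lean startup strategy", "financial performance"],
]

G = nx.Graph()
for variables in papers:
    for u, v in combinations(sorted(set(variables)), 2):
        w = G.get_edge_data(u, v, {"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)  # count co-occurrences across papers

# Centrality highlights variables that connect the research network; the
# graph can be exported in GEXF, Gephi's native format, for visualization.
print(nx.degree_centrality(G))
nx.write_gexf(G, "startup_variables.gexf")
```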

Analysis of Dog-Related Outdoor Public Space Conflicts Using Complaint Data (민원 자료를 활용한 반려견 관련 옥외 공공공간 갈등 분석)

  • Yoo, Ye-seul;Son, Yong-Hoon;Zoh, Kyung-Jin
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • Vol.52 No.1
    • /
    • pp.34-45
    • /
    • 2024
  • Companion animals are increasingly being recognized as members of society in outdoor public spaces. However, the presence of dogs in cities has become a subject of conflict between pet owners and non-pet owners, causing problems in terms of hygiene and noise. This study analyzed public complaint data containing the keywords 'dog,' 'pet,' and 'puppy' using text mining techniques, in order to identify the causes and key issues of dog-related conflicts in outdoor public spaces. The main findings are as follows. First, the majority of dog-related complaints concerned the use of outdoor public spaces. Second, different types of outdoor public spaces raise different spatial issues. Third, dog-related complaints fell into four topics: 'requesting a dog playground', 'raising safety issues related to animals', 'using facilities other than dog-only areas', and 'requesting increased park management and enforcement of pet etiquette'. This study analyzed citizens' perceptions surrounding pets at a time when the creation and use of pet-related public spaces are expanding. In particular, it is significant in that it applied a new method of collecting public opinion by adopting complaint data, which clearly states problems and requests.
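
As an illustration of how such complaint topics can be extracted, here is a minimal topic-modeling sketch; the library choice (scikit-learn's LDA) and the toy complaint texts are our assumptions, not the study's actual pipeline.

```python
# Minimal topic-extraction sketch over complaint-like text (toy data only;
# the study's actual tooling and preprocessing are not specified above).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

complaints = [
    "please build a dog playground in the park",
    "off leash dog safety problem near the trail",
    "dogs using facilities outside the dog only area",
    "request more park enforcement of pet etiquette rules",
]

vec = CountVectorizer(stop_words="english").fit(complaints)
dtm = vec.transform(complaints)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:][::-1]]  # top-3 terms per topic
    print(f"topic {i}: {top}")
```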

LDA Topic Modeling and Recommendation of Similar Patent Document Using Word2vec (LDA 토픽 모델링과 Word2vec을 활용한 유사 특허문서 추천연구)

  • Apgil Lee;Keunho Choi;Gunwoo Kim
    • Information Systems Review
    • /
    • Vol.22 No.1
    • /
    • pp.17-31
    • /
    • 2020
  • With the start of the Fourth Industrial Revolution era, technologies from various fields are being merged and new types of technologies and products are being developed. In addition, the importance of registering intellectual property rights and patents to gain market dominance is increasing both overseas and domestically. Accordingly, the number of patents to be processed per examiner is rising every year, and the time and cost of prior art research are increasing with it. A number of studies have therefore been carried out to reduce the examination time and cost for patent-pending technology. This paper proposes a method to calculate the degree of similarity among patent documents under the same priority claim, when multiple patent applications claim priority, and to provide the results to the examiner and the patent applicant. To this end, we preprocessed the irregular text of existing patent documents, used Word2vec to obtain similarity between patent documents, and built a recommendation model that ranks similar patent documents in descending order of similarity score. This allows the examiner to promptly consult the examination history of patent documents judged similar at examination time, reducing the burden of work, and enables efficient searching in the applicant's prior art research.
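
A minimal sketch of Word2vec-based similarity scoring and ranking as described above follows; the mean-of-word-vectors document representation and the toy corpus are our assumptions, since the abstract does not specify these details.

```python
# Sketch of Word2vec-based patent-document similarity, assuming documents
# are represented by the mean of their word vectors (toy tokenized corpus).
import numpy as np
from gensim.models import Word2Vec

docs = [
    ["battery", "electrode", "lithium", "cell"],
    ["lithium", "cell", "separator", "cathode"],
    ["image", "sensor", "pixel", "array"],
]

model = Word2Vec(docs, vector_size=50, window=3, min_count=1, seed=0)

def doc_vector(tokens):
    return np.mean([model.wv[t] for t in tokens], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = doc_vector(docs[0])
scores = [(i, cosine(query, doc_vector(d))) for i, d in enumerate(docs[1:], start=1)]
# Recommend in descending order of similarity score, as in the paper.
print(sorted(scores, key=lambda s: s[1], reverse=True))
```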

A Study on the Perception of Pit and Fissure Sealant using Unstructured Big Data (비정형 빅데이터를 이용한 치면열구전색(치아홈메우기)에 대한 인식분석)

  • Han-A Cho
    • Journal of Korean Dental Hygiene Science
    • /
    • Vol.6 No.2
    • /
    • pp.101-114
    • /
    • 2023
  • Background: This study aimed to explore the overall perception of pit and fissure sealants and suggest methods to revitalize their currently stagnant uptake. Methods: To determine the social perception of changes in the coverage policy for pit and fissure sealants, we divided the data into five time periods: the first period (December 1, 2009 to November 30, 2010), the second period (December 1, 2010 to September 30, 2012), the third period (October 1, 2012 to May 5, 2013), the fourth period (May 6, 2013 to September 30, 2017), and the fifth period (October 1, 2017 to December 31, 2022). We utilized text mining, an unstructured big data analysis method. Keywords were collected and analyzed using Textom, and frequency analysis of the top 30 keywords, structural analysis of the semantic network, centrality analysis, QAP correlation analysis, and co-occurrence analysis were conducted. Results: The frequency analysis showed that the top keywords for each time period were 'cavities', 'treatment', and 'children'. Regarding the structural features of the semantic network by time period, the density index was around 1.00 for all periods. The QAP correlation analysis showed the highest correlations between the first and second periods and between the fourth and fifth periods, with a correlation coefficient of 0.834. The co-occurrence analysis showed that 'cavities' and 'prevention' were the top two words across all time periods. Conclusion: This study showed that pit and fissure sealants are well accepted by society as a preventive treatment for caries. However, awareness of health education related to these sealants was found to be low. Efforts to revitalize the stagnant use of pit and fissure sealants need to be reinforced with effective education.
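
As a hedged illustration of the matrix-level comparison behind the QAP correlation analysis mentioned above, the sketch below correlates corresponding cells of two period-specific co-occurrence matrices; a full QAP test adds node-permutation resampling, and the matrices here are random toy data, not Textom output.

```python
# Toy illustration of the comparison underlying QAP correlation: correlate
# corresponding off-diagonal cells of two keyword co-occurrence matrices.
import numpy as np

rng = np.random.default_rng(0)
period_a = rng.integers(0, 10, size=(30, 30))            # e.g., first period
period_b = period_a + rng.integers(0, 3, size=(30, 30))  # structurally similar period

mask = ~np.eye(30, dtype=bool)                 # drop self co-occurrence cells
r = np.corrcoef(period_a[mask], period_b[mask])[0, 1]
print(f"QAP-style correlation: {r:.3f}")       # near 1 = very similar networks
```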

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • Vol.26 No.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales, and SNS, and dataset characteristics are correspondingly diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms. However, most practitioners lack sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm suits the characteristics of a dataset has been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, the meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among these, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, in place of IR (imbalance ratio), and we developed a new index, the Reverse ReLU Silhouette Score, for the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), Contraceptive Method Choice) were selected. The class of each dataset was predicted using the classification algorithms selected for the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, 10-fold cross-validation was applied; oversampling from 10% to 100% was applied to each fold, and the meta-features of the dataset were measured. The selected meta-features are HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of linear classifier, and hub score. F1-score was chosen as the dependent variable. The results show that the six meta-features, including the Reverse ReLU Silhouette Score and HHI proposed in this study, have a significant effect on classification performance: (1) the HHI meta-feature proposed in this study is significant for classification performance; (2) the number of features, unlike the number of classes, has a significant positive effect on classification performance; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by individual classification algorithm were consistent, except that in the per-algorithm regression analysis, the number of features was not significant for the Naïve Bayes algorithm, unlike the other algorithms.
This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. Practically, (1) the findings can be utilized in developing systems that recommend classification algorithms according to dataset characteristics, and (2) because data characteristics differ across problems, many data scientists search for the optimal algorithm by repeatedly adjusting algorithm parameters, wasting considerable hardware, cost, time, and manpower in the process. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and conclusion and discussion.
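
Two of the class-distribution meta-features discussed above can be sketched compactly; the definitions below are our reading of the abstract (HHI over class proportions, Shannon entropy of the class distribution), and the Reverse ReLU Silhouette Score is omitted because its definition is not given here.

```python
# Sketch of two class-distribution meta-features, assuming they are computed
# over class proportions (our interpretation; toy labels below).
import numpy as np

def hhi(labels):
    """Herfindahl-Hirschman Index of the class distribution, used in the
    study in place of the imbalance ratio (IR). Equals 1/k for perfectly
    balanced k classes and 1 when only one class is present."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p ** 2))

def class_entropy(labels):
    """Shannon entropy (bits) of the class distribution."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

y = np.array([0, 0, 0, 1, 1, 2])  # imbalanced 3-class toy labels
print(hhi(y), class_entropy(y))
```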

Selection Model of System Trading Strategies using SVM (SVM을 이용한 시스템트레이딩전략의 선택모형)

  • Park, Sungcheol;Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • Vol.20 No.2
    • /
    • pp.59-71
    • /
    • 2014
  • System trading has recently become more popular among Korean traders. System traders use automatic order systems based on system-generated buy and sell signals, which come from predetermined entry and exit rules coded by the traders. Most research on system trading has focused on designing profitable entry and exit rules using technical indicators. However, market conditions, strategy characteristics, and money management also influence the profitability of system trading, and unexpected price deviations from the predetermined trading rules can incur large losses. Therefore, most professional traders use strategy portfolios rather than a single strategy. Building a good strategy portfolio is important because trading performance depends on it; despite this, rule-of-thumb methods have typically been used to select trading strategies. In this study, we propose an SVM-based strategy portfolio management system. SVMs were introduced by Vapnik and are known to be effective in data mining; they can build good portfolios within a very short time, and because they minimize structural risk, they are well suited to the futures market, in which prices do not move exactly as they did in the past. Our system trading strategies include a moving-average cross system, MACD cross system, trend-following system, buy-dips-and-sell-rallies system, DMI system, Keltner channel system, Bollinger Bands system, and Fibonacci system. These strategies are well known and frequently used by professional traders. We programmed these strategies to generate automated entry and exit signals, and we propose an SVM-based strategy selection system together with a portfolio construction and order routing system. The strategy selection system is a portfolio training system: it generates training data and builds the SVM model from the optimal portfolio. We form an m×n data matrix by dividing KOSPI 200 index futures data into equal periods; the optimal strategy portfolio is derived by analyzing each strategy's performance, and the SVM model is trained on this data and the optimal portfolio. We use 80% of the data for training and the remaining 20% for testing. For training, we select the two strategies that show the highest profit on the next day: selection method 1 selects two strategies, and method 2 selects at most two strategies showing a profit of more than 0.1 point. We use the one-against-all method, which has fast processing time. We analyze daily KOSPI 200 index futures data from January 1990 to November 2011, using price change rates over 50 days as SVM input. The training period is January 1990 to March 2007 and the test period is March 2007 to November 2011. We suggest three benchmark portfolios: BM1 holds two KOSPI 200 index futures contracts for the test period; BM2 consists of the two strategies with the largest cumulative profit during the 30 days before testing starts; BM3 consists of the two strategies with the best profits during the test period. Trading costs include brokerage commission and slippage. The proposed strategy portfolio management system shows more than double the profit of the benchmark portfolios: after deducting trading costs, BM1 shows 103.44 points of profit, BM2 shows 488.61 points, and BM3 shows 502.41 points.
The best benchmark is the portfolio of the two most profitable strategies during the test period. The proposed system 1 shows 706.22 points of profit and proposed system 2 shows 768.95 points after deducting trading costs. The equity curves for the entire period show a stable pattern, which, together with the higher profit, suggests a promising trading direction for system traders. Even more stable and profitable portfolios could be built by adding a money management module to the system.
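
A minimal sketch of the one-against-all SVM selection step is given below; the feature and label construction are simplified assumptions (random toy data), not the paper's actual KOSPI 200 pipeline or labeling rules.

```python
# Sketch of one-against-all SVM strategy selection, assuming 50-day price
# change rates as features and the index of the most profitable strategy as
# the label (toy data; the paper's portfolio rules are richer than this).
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))       # 50-day price change rates per sample
y = rng.integers(0, 8, size=500)     # which of the 8 strategies won next day

split = int(0.8 * len(X))            # 80% train / 20% test, as in the paper
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X[:split], y[:split])

# Pick the strategy the model expects to be most profitable tomorrow.
print(clf.predict(X[split:split + 5]))
```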

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems
    • /
    • Vol.24 No.4
    • /
    • pp.33-49
    • /
    • 2018
  • The impact of a TV drama's success on ratings and channel promotion is very high, and its cultural and business impact has been demonstrated through the Korean Wave. Early prediction of a blockbuster TV drama is therefore strategically important for the media industry. Previous studies have tried to predict audience ratings and drama success by various methods, but most have made simple predictions using intuitive factors such as the main actor and time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of established theory. This is not only a theoretical contribution but also a practical one, as the model can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast on terrestrial channels over the 10 years from 2003 to 2012. From these, we selected the 45 most highly ranked and least highly ranked dramas and analyzed their viewing patterns over 11 steps. The assumptions and conditions for modeling are based on existing studies, the opinions of actual broadcasters, and data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation measures, which we term similarity (the sum of distances). Using this similarity measure, we predicted the success of dramas from viewers' initial viewing-time pattern distributions over episodes 1-5. To confirm that the model is not overly sensitive to the measurement method, various distance measures were applied and the model was checked for robustness; once the model was established, we improved its predictive power using a grid search. Furthermore, when a new drama is broadcast, we classify viewers who watched more than 70% of the total airtime as 'passionate viewers', and we compared the passionate-viewer percentages of the most highly ranked and least highly ranked dramas to assess a drama's blockbuster potential. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy from the initial viewing-time pattern analysis. This paper thus shows a high prediction rate while suggesting a rating-prediction method different from existing ones. Currently, broadcasters rely heavily on famous actors under so-called star systems and face more severe competition than ever due to rising production costs, a long-term recession, and aggressive investment by comprehensive programming channels and large corporations, leaving most in a financially difficult situation. The basic revenue model of these broadcasters is advertising, which is executed on the basis of audience ratings. The drama market carries demand uncertainty inherent to the nature of the commodity, while contributing strongly to the financial success of a broadcaster's content, so minimizing the risk of failure is crucial.
Thus, analyzing the distribution of initial viewing time can practically help the companies involved establish response strategies (scheduling, marketing, story changes, etc.). We also found that audience behavior is crucial to the success of a program. In this paper, we treat viewing time as a measure of how enthusiastically a program is watched, and we can predict a program's success by calculating the loyalty of its passionate viewers. This way of calculating loyalty can also be applied to other platforms, and to marketing programs such as highlights, script previews, making-of videos, characters, games, and other marketing projects.
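
The two distance measures underlying the similarity score can be sketched as follows; the 11-step toy distributions below are hypothetical, not the study's data, and preprocessing is omitted.

```python
# Sketch of the Euclidean and correlation distances used to compare early
# viewing-time patterns (toy 11-step viewing-time distributions).
import numpy as np

new_drama   = np.array([0.30, 0.18, 0.12, 0.09, 0.07, 0.06, 0.05, 0.05, 0.04, 0.02, 0.02])
hit_profile = np.array([0.28, 0.20, 0.13, 0.10, 0.07, 0.06, 0.05, 0.04, 0.03, 0.02, 0.02])

euclidean = float(np.linalg.norm(new_drama - hit_profile))
correlation = 1.0 - float(np.corrcoef(new_drama, hit_profile)[0, 1])  # correlation distance

# "Similarity" in the paper is the sum of such distances to reference
# patterns; a smaller total suggests a hit-like initial viewing pattern.
print(euclidean, correlation)
```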

Spectral Induced Polarization Characteristics of Rocks in Gwanin Vanadiferous Titanomagnetite (VTM) Deposit (관인 함바나듐 티탄철광상 암석의 광대역 유도분극 특성)

  • Shin, Seungwook
    • Geophysics and Geophysical Exploration
    • /
    • Vol.24 No.4
    • /
    • pp.194-201
    • /
    • 2021
  • The induced polarization (IP) effect is known to be caused by electrochemical phenomena at the interface between minerals and pore water. The spectral induced polarization (SIP) method is an electrical survey that localizes subsurface IP anomalies by injecting alternating currents at multiple frequencies into the ground, and it has been applied effectively to mineral exploration in various ore deposits. Titanomagnetite ores are produced by a mining company in the Gonamsan area, Gwanin-myeon, Pocheon-si, Gyeonggi-do, South Korea. Because the ores contain more than 0.4 wt% vanadium, the deposit is called the Gwanin vanadiferous titanomagnetite (VTM) deposit. Vanadium is a key material in the production of vanadium redox flow batteries, which are well suited to large-scale energy storage systems. Systematic mineral exploration was conducted to identify hidden VTM orebodies and estimate their potential resources. In geophysical exploration, laboratory measurements on rock samples help generate reliable property models from field survey data. We therefore acquired laboratory SIP data for rocks from the Gwanin VTM deposit to understand the differing SIP characteristics of ores and host rocks, and to demonstrate the applicability of the method to mineral exploration. Both the phase and resistivity spectra of ores sampled from an underground outcrop and drill cores differed from those of the host rocks, which consist of monzodiorite and quartz monzodiorite. Because the phase and resistivity at frequencies below 100 Hz mainly reflect the SIP characteristics of the rocks, we calculated mean values for the ores and the host rocks: the average phase at 0.1 Hz was -369 mrad for the ores and -39 mrad for the host rocks, and the average resistivity at 0.1 Hz was 16 Ωm for the ores and 2,623 Ωm for the host rocks. Because the SIP characteristics of the ores differ clearly from those of the host rocks, we conclude that SIP surveying is effective for mineral exploration in vanadiferous titanomagnetite deposits and that these SIP characteristics are useful for interpreting field survey data.

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • Vol.25 No.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS create enormous amounts of data, and within that data the share of unstructured data represented as text has grown geometrically. Because it is difficult to read all of this text, it is important to access it rapidly and grasp its key points; to meet this need for efficient understanding, many studies on text summarization for handling and using vast amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms have recently been proposed to generate summaries objectively and effectively, so-called 'automatic summarization'. However, almost all text summarization methods proposed to date construct summaries based on the frequency of content in the original documents, and such summaries struggle to include low-weight subjects that are mentioned less often. If a summary covers only the major subject, bias occurs and information is lost, making it hard to ascertain all the subjects a document contains. To avoid this bias, one can summarize with balance across the topics a document has so that every subject is represented, but an unbalanced distribution across subjects may still remain. To retain subject balance in a summary, it is necessary to consider the proportion of each subject in the original documents and to allocate portions to subjects evenly, so that even sentences on minor subjects are sufficiently included. In this study, we propose a 'subject-balanced' text summarization method that secures balance across all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation concepts: 'completeness', meaning the summary should fully cover the content of the original documents, and 'succinctness', meaning the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights indicating how strongly each term relates to each topic; from these weights, highly related terms can be identified for every topic, and the subjects of documents emerge from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected, called 'seed terms'. Because these terms alone are too few to describe each subject adequately, sufficiently similar terms are needed for a well-constructed subject dictionary: Word2Vec is used for word expansion to find terms similar to the seed terms. After Word2Vec modeling produces word vectors, similarity between all terms is derived using cosine similarity; the higher the cosine similarity between two terms, the stronger the relationship between them. Terms with high similarity to each subject's seed terms are selected, and after filtering these expanded terms, the subject dictionary is finally constructed. The next phase allocates subjects to every sentence in the original documents. To grasp the content of each sentence, frequency analysis is first conducted with the terms composing the subject dictionaries.
TF-IDF weights for each subject are then calculated, showing how strongly each sentence relates to each subject. Because TF-IDF weights can grow without bound, the weights for every subject in each sentence are normalized to the range 0 to 1. Each sentence is then allocated to the subject with the maximum normalized TF-IDF weight, yielding a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to measure similarity between subject sentences, forming a similarity matrix; by iteratively selecting sentences, it is possible to generate a summary that fully covers the content of the original documents while minimizing internal duplication. For evaluation of the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the proposed method's summary and a frequency-based summary verified that the proposed method's summary better retains the balance of subjects present in the original documents.
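
A compact sketch of the sentence-allocation step described above follows; the toy subject dictionaries, the simple term-frequency proxy standing in for TF-IDF, and the min-max normalization details are our assumptions, not the paper's exact formulation.

```python
# Sketch of sentence-to-subject allocation: per-subject scores are min-max
# normalized to [0, 1], then each sentence takes the subject with the
# maximum normalized weight (toy dictionaries and review-like sentences).
import numpy as np

subject_terms = {
    "room": {"bed", "clean", "quiet"},
    "food": {"breakfast", "buffet", "coffee"},
}
sentences = [
    "the bed was clean and the room quiet",
    "breakfast buffet had great coffee",
    "clean room but the coffee was cold",
]

def subject_score(sentence, terms):
    tokens = sentence.split()
    return sum(tokens.count(t) for t in terms) / len(tokens)  # simple TF proxy

scores = np.array([[subject_score(s, t) for t in subject_terms.values()]
                   for s in sentences])
# Min-max normalize each subject column to [0, 1], as described above.
span = scores.max(axis=0) - scores.min(axis=0)
norm = (scores - scores.min(axis=0)) / np.where(span == 0, 1, span)
allocation = norm.argmax(axis=1)  # subject index per sentence
print(allocation)  # e.g., [0 1 0] -> sentence groups per subject
```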