• Title/Summary/Keyword: Intelligent Techniques


Evaluation of Compaction Impact According to Compaction Roller Operating Conditions through CMV Analysis (CMV 분석을 통한 다짐롤러 운용 조건에 따른 다짐 영향 평가)

  • Kim, Jinyoung;Baek, Sungha;Kim, Namgyu;Choi, Changho;Kim, Jisun;Cho, Jinwoo
    • Journal of the Korean GEO-environmental Society
    • /
    • v.23 no.8
    • /
    • pp.11-16
    • /
    • 2022
  • The compaction process using vibrating rollers is essential in road construction earthworks to increase soil stiffness. Currently, there is no clear standard for how the compaction roller should be operated during compaction. Although simple quality inspection techniques have been developed, the plate load test (PLT) and field density test (FDT) remain the most frequently used methods for evaluating the degree of compaction during road construction. However, both inspection methods are inefficient because time and cost prevent them from covering all sections. In this study, we analyzed how the operating conditions of vibrating rollers affect compaction quality. An intelligent quality management system, a technology that has recently been developed and commercialized, was used to obtain quality inspection results for all sections. The test results showed that the speed and vibration direction of the compaction roller affected the degree of compaction, whereas the compaction direction did not.
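For context, the compaction meter value (CMV) referenced in the title is conventionally derived from the drum's vibration spectrum as the ratio of the first-harmonic amplitude to the fundamental amplitude, scaled by a constant (commonly around 300). Below is a minimal Python sketch of that conventional definition, using a synthetic acceleration signal rather than any data from the study.

```python
import numpy as np

def cmv(accel, fs, f_exc, c=300.0):
    """Compaction Meter Value: first-harmonic amplitude over fundamental
    amplitude of the drum acceleration spectrum, times a scaling constant."""
    n = len(accel)
    spectrum = np.abs(np.fft.rfft(accel * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    a_fund = spectrum[np.argmin(np.abs(freqs - f_exc))]      # amplitude at the excitation frequency
    a_harm = spectrum[np.argmin(np.abs(freqs - 2 * f_exc))]  # amplitude at the first harmonic
    return c * a_harm / a_fund

# Synthetic drum signal: 30 Hz excitation plus a weak second harmonic
fs, f_exc = 1000.0, 30.0
t = np.arange(0, 2, 1 / fs)
accel = np.sin(2 * np.pi * f_exc * t) + 0.08 * np.sin(2 * np.pi * 2 * f_exc * t)
print(f"CMV ~ {cmv(accel, fs, f_exc):.1f}")
```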

Characterization of Deep Learning-Based and Hybrid Iterative Reconstruction for Image Quality Optimization at Computer Tomography Angiography (전산화단층촬영조영술에서 화질 최적화를 위한 딥러닝 기반 및 하이브리드 반복 재구성의 특성분석)

  • Pil-Hyun, Jeon;Chang-Lae, Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.1
    • /
    • pp.1-9
    • /
    • 2023
  • For optimal image quality in computed tomography angiography (CTA), different iodine concentrations and scan parameters were applied to quantitatively evaluate the image quality characteristics of filtered back projection (FBP), hybrid iterative reconstruction (hybrid-IR), and deep learning reconstruction (DLR). A 320-row-detector CT scanner scanned a phantom with various iodine concentrations (1.2, 2.9, 4.9, 6.9, 10.4, 14.3, 18.4, and 25.9 mg/mL) located at the edge of a cylindrical water phantom 19 cm in diameter. Data obtained with each reconstruction technique were analyzed in terms of noise, coefficient of variation (COV), and root mean square error (RMSE). As the iodine concentration increased, the CT number increased, but the noise showed no particular trend. COV decreased with increasing iodine concentration for FBP, adaptive iterative dose reduction (AIDR) 3D, and the advanced intelligent clear-IQ engine (AiCE) at various tube voltages and tube currents. In addition, at low iodine concentrations there was a slight difference in COV between the reconstruction techniques, but the difference became negligible as the iodine concentration increased. For AiCE, RMSE decreased as the iodine concentration increased but rose again beyond a specific concentration (4.9 mg/mL). Therefore, for optimal CTA image acquisition, users should consider the characteristics of scan parameters such as tube current and tube voltage, as well as the iodine concentration, for each reconstruction technique.
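The three metrics named in the abstract (noise, COV, RMSE) have standard definitions over a region of interest (ROI); the sketch below illustrates them on hypothetical ROI arrays and is not tied to the study's phantom data.

```python
import numpy as np

def roi_metrics(roi, reference):
    """Noise (SD of CT numbers in the ROI), coefficient of variation
    (COV = SD / mean), and RMSE against a reference ROI."""
    roi = np.asarray(roi, dtype=float)
    reference = np.asarray(reference, dtype=float)
    noise = roi.std()
    cov = noise / roi.mean()
    rmse = np.sqrt(np.mean((roi - reference) ** 2))
    return noise, cov, rmse

# Hypothetical 3x3 ROIs of CT numbers (HU)
measured = [[310, 305, 298], [302, 315, 308], [299, 311, 306]]
ideal = [[305, 305, 305], [305, 305, 305], [305, 305, 305]]
print(roi_metrics(measured, ideal))
```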

Study on Time-of-day Operation of Pedestrian Signal Based on Residual Pedestrians (잔류보행기반 시간대별 보행신호 운영기법 연구)

  • Chae, HeeChul;Eom, Daelyoung;Yun, Ilsoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.4
    • /
    • pp.1-17
    • /
    • 2022
  • As pedestrian deaths account for a high proportion of traffic accident deaths in Korea, interest in pedestrian safety is growing. In particular, various pedestrian-centered traffic signal operation techniques need to be developed to improve the pedestrian environment at signalized intersections. Therefore, this study examined a method for operating a pedestrian signal by time of day based on residual pedestrians. To this end, a demand-responsive pedestrian signal operation technique, which extends the pedestrian signal time only during the hours when pedestrian demand and the number of remaining pedestrians increase, was applied in the field. The difference in safety with and without the new pedestrian signal operation technique was statistically analyzed. The analysis showed that the residual pedestrian rate decreased from 20% (3.3 people) before application to 8% (1.4 people) after application, a reduction of 12 percentage points (1.9 people) in pedestrians remaining in the crosswalk during the red signal, and that the position of the residual pedestrians moved from 5.2 m before application to 1.9 m after application, a reduction of 3.3 m.
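As an illustration only, the following sketch shows the general shape of a time-of-day, residual-pedestrian-responsive extension rule; every parameter (base green, extension, threshold, extension hours) is a hypothetical placeholder, not a value from the study.

```python
def pedestrian_green_time(hour, residual_count, base_green=25, extension=6,
                          max_green=40, extension_hours=range(7, 20), threshold=2):
    """Extend the pedestrian green only during designated hours and only
    when the number of pedestrians remaining in the crosswalk exceeds a
    threshold; otherwise keep the base time-of-day green."""
    if hour in extension_hours and residual_count > threshold:
        return min(base_green + extension, max_green)
    return base_green

print(pedestrian_green_time(hour=8, residual_count=4))   # extended green
print(pedestrian_green_time(hour=23, residual_count=4))  # base green (off-peak hour)
```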

A Study on the Capacity Review of One-lane Hi-pass Lanes on Highways : Focusing on Using Bootstrapping Techniques (고속도로 단차로 하이패스차로 용량 검토에 관한 연구 : 부트스트랩 기법 활용 중심으로)

  • Bosung Kim;Donghee Han
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.3
    • /
    • pp.1-16
    • /
    • 2024
  • Current highway design guidelines suggest that the capacity of a one-lane hi-pass lane is 2,000 veh/h at a mainline toll plaza and 1,700 veh/h at an interchange toll plaza. However, a study conducted in early 2010 presented the capacity of the mainline toll plaza as 1,476 to 1,665 veh/h/ln and that of the interchange toll plaza as 1,443 veh/h/ln. Accordingly, this study examined the validity of the currently proposed capacity of one-lane hi-pass lanes on highways. Based on individual vehicle passing data collected in 2021 from one-lane hi-pass gantries, capacity was calculated and compared using the speed-flow relationship and headways. In addition, the bootstrapping technique was introduced to make use of the headways, and new processing methods for the collected data were reviewed. The analysis showed that the one-lane hi-pass capacity could be estimated at 1,700 veh/h/ln for the interchange toll plaza and at least 1,700 veh/h/ln for the mainline toll plaza. Furthermore, applying the bootstrap technique to the headway data made it possible to present an estimated capacity similar to the observed capacity.
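The bootstrap step described in the abstract can be illustrated by resampling observed headways and converting each resampled mean headway into a capacity (3600 divided by the mean headway in seconds, giving veh/h). The sketch below uses synthetic headways and assumed parameters, not the 2021 gantry data.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_capacity(headways_sec, n_boot=5000):
    """Estimate lane capacity (veh/h) from saturation headways by bootstrap
    resampling: each resample yields capacity = 3600 / mean headway."""
    headways = np.asarray(headways_sec, dtype=float)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(headways, size=len(headways), replace=True)
        estimates[i] = 3600.0 / sample.mean()
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

# Hypothetical headway sample (seconds) from a one-lane hi-pass gantry
headways = rng.normal(2.1, 0.4, size=500).clip(0.8, None)
mean_cap, ci = bootstrap_capacity(headways)
print(f"capacity ~ {mean_cap:.0f} veh/h/ln, 95% CI {ci.round(0)}")
```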

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that a machine could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled to achieve good results. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network architecture. However, since not all network design alternatives can be tested given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of the dropout technique. The F1 score was used to evaluate how well the models classify the class of interest, rather than overall accuracy. The methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but how close the business data fields are to one another does not matter because each field is usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields so that the whole record is learned at once, and added a hidden layer to make the decision based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first layer in order to reduce the influence of each field's position.
In the case of the dropout technique, neurons were set to drop out with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models classify better than MLP models. This is interesting because the CNN performed well in a binary classification problem, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
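A minimal Keras sketch of the CNN-with-dropout configuration described above: the Conv1D kernel spans all input fields at once, a dense hidden layer follows, neurons drop out with probability 0.5, and the F1 score is used for evaluation. The layer sizes and synthetic data are illustrative assumptions, not the paper's exact architecture or the Portuguese bank dataset.

```python
import numpy as np
from sklearn.metrics import f1_score
from tensorflow.keras import layers, models

n_fields = 16
X = np.random.rand(1000, n_fields, 1)          # placeholder features (age, occupation, ...)
y = (np.random.rand(1000) > 0.5).astype(int)   # placeholder binary response

model = models.Sequential([
    layers.Input(shape=(n_fields, 1)),
    layers.Conv1D(32, kernel_size=n_fields, activation="relu"),  # filter spans every field at once
    layers.Flatten(),
    layers.Dense(64, activation="relu"),        # additional hidden layer on the extracted features
    layers.Dropout(0.5),                        # neurons dropped with probability 0.5
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

pred = (model.predict(X, verbose=0).ravel() > 0.5).astype(int)
print("F1 score:", f1_score(y, pred))
```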

A Study on an Adaptive Guidance Plan by Quickest Path Algorithm for Building Evacuations due to Fire (건물 화재시 Quickest Path를 이용한 Adaptive 피난경로 유도방안)

  • Sin, Seong-Il;Seo, Yong-Hui;Lee, Chang-Ju
    • Journal of Korean Society of Transportation
    • /
    • v.25 no.6
    • /
    • pp.197-208
    • /
    • 2007
  • Enormous buildings are appearing worldwide with the advancement of construction techniques. Large and complicated structures make safety more difficult to manage and demand well-matched safety measures. This research reviewed up-to-date techniques and systems applied in buildings abroad and proposed a direct guidance plan for buildings in case of fire. Since wireless sensor networks that detect fire or its effects can be installed, the plan makes use of this information. Accordingly, the authors developed a direct guidance plan based on omnidirectional guidance lights. A route can be selected with regard to both time and capacity using the concept of a non-dominated path. Finally, case studies showed that quickest path algorithms were effective for guiding efficient dispersion routes, and when certain links on preferred paths were restricted due to temperature and smoke, the relevant links could be avoided and demand restricted in the network application. Consequently, the algorithms were able to maximize safety and minimize evacuation time, which were the purposes of this study.
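For reference, the classical quickest-path idea the abstract draws on chooses, among paths feasible at each capacity level, the one minimizing travel time plus demand divided by the bottleneck capacity. The hedged sketch below uses networkx; the edge attribute names `time` and `capacity` and the toy corridor are assumptions, not the paper's network.

```python
import networkx as nx

def quickest_path(G, source, target, demand):
    """Quickest-path sketch: for each candidate capacity threshold, keep only
    links with at least that capacity, find the shortest-time path, and pick
    the path minimizing travel time + demand / bottleneck capacity."""
    thresholds = sorted({d["capacity"] for _, _, d in G.edges(data=True)})
    best_time, best_path = float("inf"), None
    for c in thresholds:
        sub = nx.DiGraph()
        sub.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                           if d["capacity"] >= c)
        if source not in sub or target not in sub:
            continue
        try:
            path = nx.shortest_path(sub, source, target, weight="time")
        except nx.NetworkXNoPath:
            continue
        travel = nx.path_weight(sub, path, weight="time")
        bottleneck = min(sub[u][v]["capacity"] for u, v in zip(path, path[1:]))
        total = travel + demand / bottleneck
        if total < best_time:
            best_time, best_path = total, path
    return best_time, best_path

# Hypothetical corridor: A->B->D is fast but narrow, A->C->D is slower but wide
G = nx.DiGraph()
G.add_edge("A", "B", time=10, capacity=20)
G.add_edge("B", "D", time=10, capacity=20)
G.add_edge("A", "C", time=15, capacity=80)
G.add_edge("C", "D", time=15, capacity=80)
print(quickest_path(G, "A", "D", demand=400))  # prefers the wide route for 400 evacuees
```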

An Empirical Study for Performance Evaluation of Web Personalization Assistant Systems (웹 기반 개인화 보조시스템 성능 평가를 위한 실험적 연구)

  • Kim, Ki-Bum;Kim, Seon-Ho;Weon, Sung-Hyun
    • The Journal of Society for e-Business Studies
    • /
    • v.9 no.3
    • /
    • pp.155-167
    • /
    • 2004
  • At this time, the two main techniques for achieving web personalization assistant systems generally concern direct manipulation and software agents. While both direct manipulation and software agents are intended to permit users to complete tasks rapidly, efficiently, and easily, their methodologies differ. The central debate involving these web personalization techniques originates from the amount of control that each allows to, or withholds from, the user. Direct manipulation can provide users with comprehensible, predictable, and controllable user interfaces that give them a feeling of accomplishment and responsibility. On the other hand, intelligent software components, the agents, can assist users with artificial intelligence by monitoring or retrieving personal histories or behaviors. In this empirical study, two web personalization assistant systems are evaluated. One of them, WebPersonalizer, is an agent-based user personalization tool; the other, AntWorld, is a collaborative recommendation tool that provides direct manipulation interfaces. Through this empirical study, we have focused on two different paradigms for web personalization assistant systems: direct manipulation and software agents. Each approach has its own advantages and disadvantages. We also provide experimental results that are worth consulting for developers of electronic commerce systems and suggest methodologies for conveniently retrieving necessary information based on users' personal needs.


Prediction of Divided Traffic Demands Based on Knowledge Discovery at Expressway Toll Plaza (지식발견 기반의 고속도로 영업소 분할 교통수요 예측)

  • Ahn, Byeong-Tak;Yoon, Byoung-Jo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.36 no.3
    • /
    • pp.521-528
    • /
    • 2016
  • The tollbooths of a main motorway toll plaza are usually operated proactively in response to variations in the traffic demands of two vehicle types, i.e. cars and other (heavy) vehicles. In this vein, forecasting accurate traffic volumes for the two vehicle types is a key element of advanced tollgate operation. Unfortunately, existing univariate short-term prediction techniques in the literature cannot easily generate the two vehicle-type traffic demands simultaneously. These practical and academic backgrounds make forecasting the future traffic volumes of the two vehicle types at an acceptable level of accuracy an attractive research topic in the Intelligent Transportation System (ITS) forecasting area. To address the shortcomings of univariate short-term prediction techniques, this article introduces a Multiple In-and-Out (MIO) forecasting model that simultaneously generates the two vehicle-type traffic volumes. The MIO model, based on a non-parametric approach, is devised under on-line access to large-scale historical data. In a feasibility test with actual data, the proposed model outperformed Kalman filtering, a widely used univariate model, in terms of prediction accuracy despite its multivariate prediction scheme.
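The abstract identifies the MIO model only as a non-parametric, multivariate approach over large-scale historical data. As a purely hypothetical illustration of that idea, the sketch below matches the current two-vehicle-type pattern against history with a k-nearest-neighbor rule; the paper's actual formulation may differ.

```python
import numpy as np

def knn_two_type_forecast(history, current_pattern, k=5):
    """Hypothetical non-parametric sketch: match the current pattern of recent
    car and heavy-vehicle counts against historical patterns and average the
    (car, heavy) volumes that followed the k nearest matches."""
    patterns = np.array([p for p, _ in history])   # shape (n, pattern_length)
    outcomes = np.array([o for _, o in history])   # shape (n, 2): next (cars, heavy)
    distances = np.linalg.norm(patterns - np.asarray(current_pattern), axis=1)
    nearest = np.argsort(distances)[:k]
    return outcomes[nearest].mean(axis=0)

# Toy history: (recent counts for [cars, heavy, cars, heavy], next-interval [cars, heavy])
history = [([120, 30, 125, 28], [130, 31]),
           ([90, 20, 95, 22], [100, 24]),
           ([118, 29, 122, 31], [128, 30])]
print(knn_two_type_forecast(history, [119, 30, 124, 29], k=2))
```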

Empirical Research on Search model of Web Service Repository (웹서비스 저장소의 검색기법에 관한 실증적 연구)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.173-193
    • /
    • 2010
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web services repositories not only be well-structured but also provide efficient tools for an environment supporting reusable software components for both service providers and consumers. As the potential of Web services for service-oriented computing is becoming widely recognized, the demand for an integrated framework that facilitates service discovery and publishing is concomitantly growing. In our research, we propose a framework that facilitates Web service discovery and publishing by combining clustering techniques and leveraging the semantics of the XML-based service specification in WSDL files. We believe that this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the Web service domain. We have developed a Web service discovery tool based on the proposed approach using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web services repositories. We believe that both service providers and consumers in a service-oriented computing environment can benefit from our Web service discovery approach.
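As a hedged illustration of the clustering step, the sketch below reduces WSDL-derived text to TF-IDF vectors and groups it with a self-organizing map, one common kind of unsupervised neural network. MiniSom and the toy service descriptions are stand-ins; the paper's actual network and preprocessing are not specified here.

```python
from minisom import MiniSom
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy WSDL-derived term strings; real input would come from parsed WSDL files
service_texts = [
    "get weather forecast temperature city",
    "convert currency exchange rate amount",
    "weather humidity wind city forecast",
]
vectors = TfidfVectorizer().fit_transform(service_texts).toarray()

# 2x2 self-organizing map: services mapped to the same cell form a cluster
som = MiniSom(2, 2, vectors.shape[1], sigma=1.0, learning_rate=0.5, random_seed=42)
som.train_random(vectors, 500)
print([som.winner(v) for v in vectors])
```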

Facilitating Web Service Taxonomy Generation : An Artificial Neural Network based Framework, A Prototype Systems, and Evaluation (인공신경망 기반 웹서비스 분류체계 생성 프레임워크의 실증적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.33-54
    • /
    • 2010
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction both within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web service repositories not only be well-structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is concomitantly growing. A number of public Web service repositories have been proposed, but Web service taxonomy generation has not been satisfactorily addressed. Unfortunately, most existing Web service taxonomies are either too rudimentary to be useful or too hard to maintain. In this paper, we propose a Web service taxonomy generation framework that combines artificial neural network-based clustering techniques with descriptive label generation and leverages the semantics of the XML-based service specification in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service repositories. We report on some preliminary results demonstrating the efficacy of the proposed approach.
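The taxonomy framework pairs clustering with descriptive label generation. One plausible, purely hypothetical labeling step, not necessarily the paper's, is to take the top TF-IDF terms of each cluster centroid as that cluster's candidate taxonomy label, as sketched below.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_labels(texts, assignments, top_n=3):
    """For each cluster of service descriptions, take the highest-weight
    TF-IDF terms of the cluster centroid as a candidate taxonomy label."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(texts).toarray()
    terms = np.array(vectorizer.get_feature_names_out())
    labels = {}
    for cluster in set(assignments):
        rows = [i for i, a in enumerate(assignments) if a == cluster]
        centroid = matrix[rows].mean(axis=0)
        labels[cluster] = list(terms[np.argsort(centroid)[::-1][:top_n]])
    return labels

# Toy descriptions and cluster assignments (e.g. from a clustering step like the SOM sketch above)
texts = ["weather forecast city", "city weather humidity", "currency exchange rate"]
print(cluster_labels(texts, assignments=[0, 0, 1]))
```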