• Title/Summary/Keyword: AI Techniques (AI 기법)


Comparative Study of Automatic Trading and Buy-and-Hold in the S&P 500 Index Using a Volatility Breakout Strategy (변동성 돌파 전략을 사용한 S&P 500 지수의 자동 거래와 매수 및 보유 비교 연구)

  • Sunghyuck Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.6
    • /
    • pp.57-62
    • /
    • 2023
  • This research is a comparative analysis of automated trading of the U.S. S&P 500 index using the volatility breakout strategy against the Buy-and-Hold approach. The volatility breakout strategy is a trading method that exploits price movements after periods of relative market stability or consolidation. Specifically, large price movements are observed to occur more frequently after periods of low volatility. When a stock moves within a narrow price range for a while and then suddenly rises or falls, it is expected to continue moving in that direction, and traders adopt the volatility breakout strategy to capitalize on these movements. The 'k' value is a multiplier applied to a measure of recent market volatility. One such measure is the Average True Range (ATR), which averages the spread between the high and low prices (adjusted for overnight gaps) over recent trading days. The 'k' value plays a crucial role in setting a trader's entry threshold. This study used a baseline 'k' value and compared the resulting returns with the Buy-and-Hold strategy, finding that algorithmic trading using the volatility breakout strategy achieved slightly higher returns. In future work, we plan to present simulation results that maximize returns by determining the optimal 'k' value for automated trading of the S&P 500 index using deep learning techniques.
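The ATR and entry-threshold mechanics described in the abstract can be sketched as follows. The sample bars, the 14-day window, and k = 0.5 are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of a volatility breakout entry threshold:
# threshold = today's open + k * previous day's range.

def true_range(high, low, prev_close):
    """True range: largest of high-low, |high-prev_close|, |low-prev_close|."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(bars, window=14):
    """Average True Range over the last `window` bars.
    `bars` is a list of (high, low, close) tuples, oldest first."""
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, c) in enumerate(bars) if i > 0]
    recent = trs[-window:]
    return sum(recent) / len(recent)

def breakout_threshold(today_open, prev_range, k=0.5):
    """A buy signal fires when price exceeds open + k * previous range."""
    return today_open + k * prev_range

bars = [(102, 98, 100), (104, 99, 103), (106, 101, 105)]
prev_high, prev_low, _ = bars[-1]
thr = breakout_threshold(today_open=105, prev_range=prev_high - prev_low, k=0.5)
print(thr)  # 105 + 0.5 * 5 = 107.5
```

Smaller k values trigger entries on smaller breakouts (more trades, more noise); larger k values wait for stronger moves.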

A Study on the evaluation technique rubric suitable for the characteristics of digital design subject (디지털 디자인 과목의 특성에 적합한 평가기법 루브릭에 관한 연구)

  • Cho, Hyun Kyung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.6
    • /
    • pp.525-530
    • /
    • 2023
  • Digital drawing subjects require finer-grained evaluation elements and graduated evaluation criteria, in line with recent moves toward innovative curricula. The purpose of this paper is to present criteria for evaluating drawing work and to propose them as a rubric. At the beginner level, the criteria are technical skills such as the accuracy and consistency of lines and the proportion and balance of the picture; at the intermediate level, the ability to effectively utilize various brushes and tools. The advanced level centers on creativity and originality: a new perspective or a unique interpretation of a given subject. In addition, as a measure of understanding of design principles, completeness is evaluated through the ability to actively utilize the various functions of digital drawing software in terms of principles such as layout, color, and shape. Introducing rubric evaluation matters because it allows instructors to make objective and consistent evaluations; the key contribution of rubric evaluation in such art subjects is to help learners clearly grasp their strengths and weaknesses, so that through feedback on each item they can identify what needs improvement and develop better drawing skills.

Approaches to Applying Social Network Analysis to the Army's Information Sharing System: A Case Study (육군 정보공유체계에 사회관계망 분석을 적용하기 위한방안: 사례 연구)

  • GunWoo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.597-603
    • /
    • 2023
  • The paradigm of military operations has evolved from platform-centric warfare to network-centric warfare and further to information-centric warfare, driven by advancements in information technology. In recent years, with the development of cutting-edge technologies such as big data, artificial intelligence, and the Internet of Things (IoT), military operations are transitioning towards knowledge-centric warfare (KCW), based on artificial intelligence. Consequently, the military places significant emphasis on integrating advanced information and communication technologies (ICT) to establish reliable C4I (Command, Control, Communication, Computer, Intelligence) systems. This research emphasizes the need to apply data mining techniques to analyze and evaluate various aspects of C4I systems, including enhancing combat capabilities, optimizing utilization in network-based environments, efficiently distributing information flow, facilitating smooth communication, and effectively implementing knowledge sharing. Data mining serves as a fundamental technology in modern big data analysis, and this study utilizes it to analyze real-world cases and propose practical strategies to maximize the efficiency of military command and control systems. The research outcomes are expected to provide valuable insights into the performance of C4I systems and reinforce knowledge-centric warfare in contemporary military operations.
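As a minimal illustration of the social network analysis the abstract advocates, the sketch below computes normalized degree centrality over a toy information-sharing graph. The node names are hypothetical stand-ins, not actual Army units or C4I nodes.

```python
# Toy sketch of degree centrality on a hypothetical sharing network.
from collections import defaultdict

# Illustrative sharing links (all node names are hypothetical).
edges = [("HQ", "BnA"), ("HQ", "BnB"),
         ("BnA", "CoA1"), ("BnA", "CoA2"), ("BnB", "CoB1")]

def degree_centrality(edge_list):
    """Degree centrality normalized by (n - 1): how directly connected
    each node is within the sharing network."""
    adj = defaultdict(set)
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

cent = degree_centrality(edges)
hub = max(cent, key=cent.get)  # the busiest sharing node in this toy graph
```

In a real C4I analysis the edges would come from logged information flows, and richer measures (betweenness, eigenvector centrality) would identify bottlenecks as well as hubs.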

A Study on the Drug Classification Using Machine Learning Techniques (머신러닝 기법을 이용한 약물 분류 방법 연구)

  • Anmol Kumar Singh;Ayush Kumar;Adya Singh;Akashika Anshum;Pradeep Kumar Mallick
    • Advanced Industrial Science
    • /
    • v.3 no.2
    • /
    • pp.8-16
    • /
    • 2024
  • This paper presents a drug classification system whose goal is to predict the appropriate drug for a patient based on demographic and physiological traits. The dataset consists of attributes such as Age, Sex, BP (blood pressure), Cholesterol Level, and Na_to_K (sodium-to-potassium ratio), with the objective of determining the kind of drug to be given. The models used in this paper are K-Nearest Neighbors (KNN), Logistic Regression, and Random Forest. GridSearchCV was further used to fine-tune hyperparameters with 5-fold cross-validation, and each model was trained and tested on the dataset. To assess each model both with and without hyperparameter tuning, evaluation metrics such as accuracy, confusion matrices, and classification reports were used; the accuracies of the models without GridSearchCV were 0.7, 0.875, and 0.975, and with GridSearchCV were 0.75, 1.0, and 0.975. According to the GridSearchCV results, Logistic Regression is the most suitable of the three models for drug classification, followed by K-Nearest Neighbors. Na_to_K also proved to be an essential feature in predicting the outcome.

Crack detection in concrete using deep learning for underground facility safety inspection (지하시설물 안전점검을 위한 딥러닝 기반 콘크리트 균열 검출)

  • Eui-Ik Jeon;Impyeong Lee;Donggyou Kim
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.25 no.6
    • /
    • pp.555-567
    • /
    • 2023
  • Cracks in tunnels are currently identified through visual inspections conducted by inspectors on images acquired by tunnel imaging acquisition systems. This labor-intensive approach has inherent limitations, as it depends on the inspectors' subjective judgment. Recently, research efforts have actively explored the use of deep learning to detect tunnel cracks automatically. However, most studies use public datasets or lack sufficient objectivity in the analysis process, making the results difficult to apply in practical operations. In this study, we selected test datasets consisting of images in the same format as those obtained from the actual inspection system in order to evaluate deep learning models objectively. Additionally, we introduced ensemble techniques to complement the strengths and weaknesses of the individual deep learning models, thereby improving the accuracy of crack detection. As a result, we achieved high recall rates of 80%, 88%, and 89% for cracks of size 0.2 mm, 0.3 mm, and 0.5 mm, respectively, in the test images. The deep learning results also included numerous cracks that the inspectors could not find. If cracks can be detected with sufficient accuracy in a more objective evaluation using images from other tunnels not included in this study, we judge that deep learning can be introduced into facility safety inspection.
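The two evaluation ideas in the abstract can be sketched together: majority-vote ensembling of per-model crack predictions, and recall as the fraction of true cracks recovered. The binary prediction vectors below are illustrative, not the paper's data.

```python
# Sketch: majority-vote ensemble of binary crack predictions + recall.

def majority_vote(predictions):
    """A region is labeled 'crack' (1) when more than half of the
    models agree. `predictions` is a list of equal-length 0/1 lists."""
    n_models = len(predictions)
    return [int(sum(p[i] for p in predictions) > n_models / 2)
            for i in range(len(predictions[0]))]

def recall(pred, truth):
    """TP / (TP + FN): the share of true cracks that were detected."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return tp / (tp + fn)

preds = [
    [1, 0, 1, 1, 0],  # model A
    [1, 1, 1, 0, 0],  # model B
    [1, 0, 1, 1, 1],  # model C
]
truth = [1, 0, 1, 1, 0]
ens = majority_vote(preds)   # [1, 0, 1, 1, 0]
print(recall(ens, truth))    # 1.0
```

Voting suppresses each model's idiosyncratic false positives while retaining detections that most models agree on, which is why ensembling can raise recall without flooding inspectors with spurious cracks.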

Identifying Analog Gauge Needle Objects Based on Image Processing for a Remote Survey of Maritime Autonomous Surface Ships (자율운항선박의 원격검사를 위한 영상처리 기반의 아날로그 게이지 지시바늘 객체의 식별)

  • Hyun-Woo Lee;Jeong-Bin Yim
    • Journal of Navigation and Port Research
    • /
    • v.47 no.6
    • /
    • pp.410-418
    • /
    • 2023
  • Recently, advancement and commercialization in the field of maritime autonomous surface ships (MASS) have progressed rapidly. Concurrently, studies are underway to develop methods for automatically and remotely surveying the condition of various on-board equipment to ensure the navigational safety of MASS. One key issue is how to obtain values from the analog gauges installed in various equipment through image processing. This approach enables non-contact reading of gauge values without modifying or replacing already installed or planned equipment, eliminating the need for type-approval changes from classification societies. The objective of this study was to identify a dynamically changing indicator needle within noisy images of analog gauges; the needle object must be identified because its position determines the accurate reading of the gauge value. An analog pressure gauge attached to an emergency fire pump model was used for image capture. The acquired images were pre-processed with Gaussian filtering, thresholding, and morphological operations, and the needle object was then identified with a Hough Transform. The experimental results confirmed that the needle object and its center could be identified in noisy analog gauge images. The findings suggest that the image processing method applied in this study can be used for shape identification in analog gauges installed on ships, and it is expected to be applicable to the automatic remote survey of MASS.
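Once the needle line has been identified (the paper uses a Hough Transform for this), reading the gauge reduces to mapping the needle's angle onto the printed scale. The sketch below assumes a simple linear scale; the angle convention and scale limits are illustrative, not from the paper.

```python
# Sketch: convert an identified needle position into a gauge reading.
import math

def needle_angle(cx, cy, tip_x, tip_y):
    """Needle angle in degrees around the gauge center, in [0, 360)."""
    return math.degrees(math.atan2(tip_y - cy, tip_x - cx)) % 360

def gauge_value(angle, angle_min, angle_max, val_min, val_max):
    """Linearly interpolate the needle angle onto the printed scale."""
    frac = (angle - angle_min) / (angle_max - angle_min)
    return val_min + frac * (val_max - val_min)

# Hypothetical pressure gauge: 0 bar at 0 deg, 10 bar after a 270 deg sweep.
angle = needle_angle(100, 100, 170, 170)   # needle tip down-right of center
value = gauge_value(angle, 0, 270, 0, 10)  # ~1.67 bar
```

In practice the sweep direction and zero angle must be calibrated per gauge face, and image coordinates have y pointing down, which flips the angle sign relative to the usual mathematical convention.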

Class-Agnostic 3D Mask Proposal and 2D-3D Visual Feature Ensemble for Efficient Open-Vocabulary 3D Instance Segmentation (효율적인 개방형 어휘 3차원 개체 분할을 위한 클래스-독립적인 3차원 마스크 제안과 2차원-3차원 시각적 특징 앙상블)

  • Sungho Song;Kyungmin Park;Incheol Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.7
    • /
    • pp.335-347
    • /
    • 2024
  • Open-vocabulary 3D point cloud instance segmentation (OV-3DIS) is the challenging visual task of segmenting a 3D scene point cloud into object instances of both base and novel classes. In this paper, we propose Open3DME, a novel model for OV-3DIS, to address important design issues and overcome the limitations of existing approaches. First, in order to improve the quality of class-agnostic 3D masks, our model makes use of T3DIS, an advanced Transformer-based 3D point cloud instance segmentation model, as its mask proposal module. Second, in order to obtain semantically text-aligned visual features for each point cloud segment, our model extracts both 2D and 3D features from the point cloud and the corresponding multi-view RGB images by using pretrained CLIP and OpenSeg encoders, respectively. Finally, to effectively make use of both 2D and 3D visual features of each point cloud segment during label assignment, our model adopts a unique feature ensemble method. To validate our model, we conducted both quantitative and qualitative experiments on the ScanNet-V2 benchmark dataset, demonstrating significant performance gains.
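The label-assignment step can be sketched generically: combine a segment's 2D and 3D features, then assign the class whose text embedding is most cosine-similar. All vectors and the ensemble weight below are toy stand-ins for CLIP/OpenSeg outputs, and the simple convex combination is an assumption, not necessarily the paper's exact ensemble rule.

```python
# Sketch: 2D-3D feature ensemble + open-vocabulary label assignment.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def ensemble(feat_2d, feat_3d, w=0.5):
    """Convex combination of the two features, re-normalized to unit length."""
    return normalize([w * a + (1 - w) * b for a, b in zip(feat_2d, feat_3d)])

def assign_label(feat, text_embeds):
    """Pick the class whose text embedding is most cosine-similar
    (all vectors are unit-length, so the dot product is the cosine)."""
    return max(text_embeds,
               key=lambda c: sum(a * b for a, b in zip(feat, text_embeds[c])))

text_embeds = {"chair": normalize([1.0, 0.1, 0.0]),
               "table": normalize([0.0, 1.0, 0.2])}
seg_2d = normalize([0.9, 0.2, 0.1])   # toy image-side feature
seg_3d = normalize([0.8, 0.3, 0.0])   # toy point-cloud-side feature
label = assign_label(ensemble(seg_2d, seg_3d), text_embeds)
print(label)  # "chair"
```

Because the class set lives only in the text embeddings, novel classes can be added at inference time by embedding new class names, which is the essence of the open-vocabulary setting.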

A Development of Flood Mapping Accelerator Based on HEC-softwares (HEC 소프트웨어 기반 홍수범람지도 엑셀러레이터 개발)

  • Kim, JongChun;Hwang, Seokhwan;Jeong, Jongho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.2
    • /
    • pp.173-182
    • /
    • 2024
  • Recently, there has been a trend toward using data-driven models employing artificial intelligence technologies, such as machine learning, for flood prediction. These data-driven models offer the advantage of reusing pre-training results, significantly reducing the required simulation time. However, a considerable amount of flood data is still necessary for pre-training such models, while the observed data available in practice are often insufficient. As an alternative, validated simulation results from physically based models are being employed as pre-training data alongside observed data. In this context, we developed a flood mapping accelerator to generate flood maps for pre-training. The proposed accelerator automates the entire flood mapping process: estimating flood discharge with HEC-1, calculating water surface levels with HEC-RAS, and simulating channel overflow and generating flood maps with RAS Mapper. With the accelerator, users can easily prepare a pre-training database for data-driven models from hundreds to tens of thousands of rainfall scenarios. It includes various convenient menus in a graphical user interface (GUI), and its practical applicability has been validated across 26 test beds.
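The accelerator's batch structure (scenario → discharge → stage → map) can be sketched as a driver loop. The three step functions below are hypothetical placeholders standing in for the real HEC-1, HEC-RAS, and RAS Mapper invocations; they compute dummy values only so the orchestration pattern is runnable.

```python
# Structural sketch of a scenario-batch driver (all relations are dummies,
# NOT actual HEC-1 / HEC-RAS / RAS Mapper calls or hydrology).

def estimate_discharge(rainfall_mm):      # placeholder for the HEC-1 step
    return 2.5 * rainfall_mm              # dummy rainfall-runoff relation

def compute_stage(discharge):             # placeholder for the HEC-RAS step
    return 0.1 * discharge ** 0.5         # dummy rating-curve relation

def build_flood_map(scenario_id, stage):  # placeholder for the RAS Mapper step
    return {"scenario": scenario_id, "stage_m": round(stage, 3)}

def run_accelerator(rainfall_scenarios):
    """Run the full chain for every rainfall scenario and collect the maps,
    mimicking how the accelerator builds a pre-training database."""
    return [build_flood_map(i, compute_stage(estimate_discharge(r)))
            for i, r in enumerate(rainfall_scenarios)]

maps = run_accelerator([50, 100, 200])    # mm of rainfall per scenario
```

The point of automating this chain is throughput: once each step is scriptable, thousands of scenarios can be generated unattended, which is what makes physically based simulation viable as a pre-training data source.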

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • Along with economic growth and industrial development, demand is increasing for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during manufacturing, such as iron, aluminum, or plastic, which can lead to serious problems or malfunction of the product, or fires in electric vehicles. To solve these problems, it is necessary to determine whether foreign materials are present inside the product, and many tests have been performed using non-destructive testing methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) that lowers detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the low-quality limitations that make foreign substances hard to detect. First, the global contrast of the X-ray images is increased through histogram stretching. Second, a local contrast enhancement technique is applied to strengthen high-frequency signals and local contrast. Third, unsharp masking is applied to sharpen edges, making objects more visible. Fourth, the Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Finally, the YOLOv5 algorithm is trained and employed to detect foreign objects. Experimental results show that the proposed method improves performance metrics such as precision by more than 10% compared to the low-quality images.
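The first step of the pipeline, global histogram stretching, can be sketched directly: linearly remap pixel intensities so the darkest pixel becomes 0 and the brightest 255. The 3x3 "image" below is an illustrative stand-in for an X-ray frame.

```python
# Sketch of step 1: linear contrast (histogram) stretching.

def histogram_stretch(image, out_min=0, out_max=255):
    """Linearly remap a 2D list of grayscale intensities onto
    [out_min, out_max]."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                       # flat image: nothing to stretch
        return [[out_min for _ in row] for row in image]
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (p - lo) * scale) for p in row] for row in image]

xray = [[100, 110, 120],
        [105, 115, 125],
        [110, 120, 130]]
stretched = histogram_stretch(xray)
print(stretched[0][0], stretched[2][2])  # 0 255
```

Stretching fixes global dynamic range only; that is why the pipeline follows it with local contrast enhancement and unsharp masking, which act on neighborhood-level detail the global remap cannot recover.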

Meta-Analytic Approach to the Effects of Food Processing Treatment on Pesticide Residues in Agricultural Products (식품가공처리가 농산물 잔류농약에 미치는 영향에 대한 메타분석)

  • Kim, Nam Hoon;Park, Kyung Ai;Jung, So Young;Jo, Sung Ae;Kim, Yun Hee;Park, Hae Won;Lee, Jeong Mi;Lee, Sang Mi;Yu, In Sil;Jung, Kweon
    • The Korean Journal of Pesticide Science
    • /
    • v.20 no.1
    • /
    • pp.14-22
    • /
    • 2016
  • A trial of combining and quantifying the effects of food processing on various pesticides was carried out using meta-analysis. In this study, weighted mean response ratios and confidence intervals for the reduction of pesticide residue levels in fruits and vegetables treated with various food processing techniques were calculated using the statistical tools of meta-analysis. The weighted mean response ratios for tap-water washing, peeling, blanching (boiling), and oven drying were 0.52, 0.14, 0.34, and 0.46, respectively. Among the food processing methods, peeling showed the greatest effect on the reduction of pesticide residues. Pearson's correlation coefficient (r = 0.624) between the weighted mean response ratios and the octanol-water partition coefficients ($\log P_{ow}$) of twelve pesticides processed with tap-water washing confirmed a positive correlation at the 0.05 significance level (p = 0.03); that is, a pesticide with a higher $\log P_{ow}$ showed a higher weighted mean response ratio. These results could be used as reference data for processing factors in risk assessment and as information for consumers on how to reduce pesticide residues in agricultural products.
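The core meta-analytic quantity can be sketched as follows. The per-study ratios and variances are hypothetical, and inverse-variance weighting is assumed here as the common choice; the paper's exact weighting scheme may differ.

```python
# Sketch: inverse-variance weighted mean response ratio across studies.

def weighted_mean_response_ratio(ratios, variances):
    """Weighted mean of per-study response ratios (residue after
    processing / residue before), weighted by inverse variance so that
    more precise studies count more."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

# Hypothetical peeling studies: ratios well below 1 mean strong reduction.
ratios = [0.10, 0.15, 0.18]
variances = [0.01, 0.02, 0.02]
wmr = weighted_mean_response_ratio(ratios, variances)
print(round(wmr, 3))
```

A weighted mean ratio of 0.14 for peeling, as reported above, means roughly 86% of the residue is removed on average across the pooled studies.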