• Title/Summary/Keyword: No-code data analysis


Analysis on Trends of No-Code Machine Learning Tools

  • Yo-Seob, Lee;Phil-Joo, Moon
    • International Journal of Advanced Culture Technology, v.10 no.4, pp.412-419, 2022
  • The amount of digital text data is growing exponentially, and many machine learning solutions are used to monitor and manage this data. Artificial intelligence and machine learning appear in many areas of our daily lives, but the underlying processes and concepts are not easy for most people to understand. At a time when many experts are needed to run a machine learning solution, no-code machine learning tools are a good alternative. A no-code machine learning tool is a platform that enables machine learning tasks to be performed without engineers or developers. The latest no-code machine learning tools run in the browser, so no additional software needs to be installed, and their simple GUI interfaces make them easy to use. Using these platforms can save considerable money and time because less skill is required and less code has to be written. No-code machine learning tools also make artificial intelligence and machine learning easier to understand. In this paper, we examine no-code machine learning tools and compare their features.

Claydox-based Test Report Automation and No-Code Data Analysis (클레이독스 기반의 시험성적서 자동화 및 노코드 데이터 분석법의 활용 연구)

  • Kim, Jong Jin;Yoo, Dong Hee
    • The Journal of Information Systems, v.33 no.3, pp.147-170, 2024
  • Purpose This study aims to propose an automated test report management method using Claydox and ChatGPT to achieve digital transformation and carbon neutrality in the testing and certification industry. Focusing on the water quality sector, this study also explores the potential applicability of the proposed method to other environmental fields. Design/methodology/approach The test report processing was digitized using Claydox and ChatGPT. The process consists of three stages: 1) automated data entry from the photographic recording of raw data, 2) test report creation, and 3) automated report analysis. Additionally, the study demonstrated the potential for real-time data analysis using no-code platforms. Findings The study found that the test report processing using Claydox significantly improved efficiency. It maintained data consistency, greatly enhanced work efficiency, and reduced time and costs. The use of electronic documents contributed to reducing greenhouse gas emissions, resulting in positive environmental impacts. Furthermore, legal compliance and post-management preparedness were strengthened, providing reliable evidence in case of legal disputes.

Priority Analysis for Software Functions Using Social Network Analysis and DEA(Data Envelopment Analysis) (사회연결망 분석과 자료포락분석 기법을 이용한 소프트웨어 함수 우선순위 분석 연구)

  • Huh, Sang Moo;Kim, Woo Je
    • Journal of Information Technology Services, v.17 no.3, pp.171-189, 2018
  • To remove software defects and improve the performance of software, many developers perform code inspections and use static analysis tools. A code inspection is an activity performed manually to detect defects in the developed source code; however, there is no clear criterion for which source codes should be inspected. A static analysis tool can automatically detect software defects by analyzing source code without running it, but it has the disadvantage of analyzing only the code within each function, without analyzing the relations among functions. The functions in source code are interconnected and form a social network. Functions that occupy critical locations in this network can be important enough to affect overall quality, whereas a static analysis tool merely reports which functions were called many times. In this study, core functions are elicited by applying social network analysis and DEA (Data Envelopment Analysis) to the CUBRID open-source database code. In addition, we suggest clear criteria for selecting the target sources for code inspection and ways to find core functions so as to minimize defects and improve performance.
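The network step in the abstract above can be sketched with a toy call graph. The function names and edges here are hypothetical illustrations (not CUBRID code), and degree centrality stands in as one simple social-network-analysis measure; the paper's actual metrics and DEA step are not reproduced.

```python
# A call graph as an adjacency list: caller -> callees (hypothetical example).
call_graph = {
    "main": ["parse", "execute", "log"],
    "parse": ["tokenize", "log"],
    "execute": ["fetch", "log"],
    "fetch": ["log"],
    "tokenize": [],
    "log": [],
}

def degree_centrality(graph):
    """In-degree plus out-degree of every function, normalized by (n - 1)."""
    nodes = set(graph)
    for callees in graph.values():
        nodes.update(callees)
    n = len(nodes)
    degree = {f: 0 for f in nodes}
    for caller, callees in graph.items():
        degree[caller] += len(callees)          # outgoing calls
        for callee in callees:
            degree[callee] += 1                 # incoming calls
    return {f: d / (n - 1) for f, d in degree.items()}

centrality = degree_centrality(call_graph)
core = sorted(centrality, key=centrality.get, reverse=True)
print(core[0])  # -> log (called from four places: highest combined degree)
```

Functions ranked near the top of such a list would be the candidates for targeted code inspection.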

Development of the Liberal Arts Course for Informatics, Mathematics, and Science Convergence Education using No Code Data Analysis Tool (노 코드 데이터 분석 도구를 활용한 정보·수학·과학 융합교육 교양 강좌 개발)

  • Soyul Yi;Youngjun Lee
    • Proceedings of the Korean Society of Computer Information Conference, 2023.01a, pp.447-448, 2023
  • In this study, a liberal arts course for informatics, mathematics, and science convergence education using a no-code program was developed for the digital education of non-majors. Orange3 Data Mining was selected as the no-code program because of its strengths in data analysis, visualization, and the easy use of machine learning models. In addition, the educational content was selected so as to integrate science, mathematics, and informatics, considering the importance of these core subjects in preparing for changes in the industrial environment and their close connection to data analysis. An expert review by eight reviewers confirmed that the developed educational program achieved content validity. In a follow-up study, we intend to apply this course to university undergraduates and examine its effectiveness.


Analysis of the performances of the CFD schemes used for coupling computation

  • Chen, Guangliang;Jiang, Hongwei;Kang, Huilun;Ma, Rui;Li, Lei;Yu, Yang;Li, Xiaochang
    • Nuclear Engineering and Technology, v.53 no.7, pp.2162-2173, 2021
  • In this paper, the coupling of a fine-mesh computational fluid dynamics (CFD) thermal-hydraulics (TH) code and a neutronics code is achieved using Ansys Fluent User Defined Functions (UDF) for code development, including parallel mesh mapping, data computation, and data transfer. Several CFD schemes are also designed for mesh mapping and data transfer to guarantee physical conservation in the coupling computation. Because no rigorous research gives robust guidance on the various CFD schemes that must be chosen before a fine-mesh coupling computation, this work presents a quantitative analysis of CFD meshing and mapping schemes to improve the accuracy of the value and location of key physical predictions. Furthermore, the effect of the sub-pin-scale coupling computation is also studied. It is observed that even a pin-resolved coupling computation can create a large deviation in the maximum value and its spatial location, which proves the significance of research on mesh mapping and data transfer for CFD codes in a coupling computation.

System Analysis for the Automated Circulation (대출업무 자동화를 위한 시스팀설계에 관한 연구)

  • Kim, Kwang-Yeong
    • Journal of the Korean BIBLIA Society for library and Information Science, v.4 no.1, pp.85-102, 1980
  • Accepting the necessity of maintaining the objectives of the existing circulation system, a computer-based system can be designed by the system analyst and librarians to gain a variety of improvements in the maintenance and accessibility of circulation records and in more meaningful statistical records. If the terminal is operated on-line, circulation data is transmitted directly to the computer, where it may update the circulation file immediately or, alternatively, be kept in a direct-access file for updating in batch mode. The on-line approaches considered for circulation operations are the "data-collection system" and the "bar-coded label system". The bar-coded label system offers simple, quick, and error-free input of data. Attached to the CRT terminal is a hand-held "light pen" that reads a bar-coded label as the pen is passed over it (one label affixed to the book itself, another carried on the borrower card); the data concerning the transaction is instantly stored in the central mini-computer. It is useful and economical for many libraries in Korea to cooperate in designing the borrower ID code, book number, and classification code of the bar-coded label system, with the members of the computer center and the library staff involved at every stage. For a book loan, the borrower ID code, book number, and classification code are scanned by the bar-code scanner or light pen, and the computer decides whether to make the loan and stores the data. The visual display unit shows the present status of a borrower's loans and decides whether the borrower can borrow.
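The loan decision described above can be sketched in a few lines. The borrower records, book numbers, and loan limit below are hypothetical illustrations of the kind of circulation-file check the system performs after scanning the two bar codes.

```python
# Sketch of the loan decision: a scanned borrower ID and book number are
# checked against the circulation records, then the loan is recorded.
# All records and the loan limit are hypothetical.

LOAN_LIMIT = 5  # assumed per-borrower limit

borrowers = {"B001": {"name": "Kim", "loans": ["QA76.9"], "blocked": False}}
books = {"QA76.9": {"on_loan": True}, "Z678.9": {"on_loan": False}}

def check_out(borrower_id, book_no):
    """Return True and record the loan if the borrower may borrow the book."""
    borrower = borrowers.get(borrower_id)
    book = books.get(book_no)
    if borrower is None or book is None:
        return False                      # unknown bar code
    if borrower["blocked"] or len(borrower["loans"]) >= LOAN_LIMIT:
        return False                      # borrower not in good standing
    if book["on_loan"]:
        return False                      # copy already charged out
    book["on_loan"] = True                # update the circulation file
    borrower["loans"].append(book_no)
    return True

print(check_out("B001", "Z678.9"))  # True: copy available, borrower OK
print(check_out("B001", "QA76.9"))  # False: that copy is already on loan
```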


An Evaluation of ACI 349 Code for Shear Design of CIP Anchor (직매형 앵커기초의 전단설계를 위한 ACI 349 Code의 평가)

  • Jang Jung-Bum;Hwang Kyeong-Min;Suh Yong-Pyo
    • Proceedings of the Computational Structural Engineering Institute Conference, 2005.04a, pp.464-470, 2005
  • In this study, a numerical analysis is carried out to identify the influence of design factors on the shear capacity of cast-in-place (CIP) anchors in ACI 349 Code, which is used for the design of fastening systems at nuclear power plants (NPPs). The MASA program is used to develop the numerical analysis model, and the developed model is verified against various test data for CIP anchors. Both $l/d_o$ and $c_1/l$ were considered as design factors. As a result, variation of $l/d_o$ has no influence on the shear capacity of the CIP anchor, but $c_1/l$ has a large influence. Therefore, it is shown that ACI 349 Code may give non-conservative results compared with the real shear capacity of CIP anchors, depending on $c_1/l$.


A Study on the Classification of Variables Affecting Smartphone Addiction in Decision Tree Environment Using Python Program

  • Kim, Seung-Jae
    • International journal of advanced smart convergence, v.11 no.4, pp.68-80, 2022
  • Since the launch of AI, technology development to implement complete and sophisticated AI functions has continued. In efforts to develop technologies for complete automation, machine learning and deep learning techniques are mainly used. These techniques deal with supervised learning, unsupervised learning, and reinforcement learning as internal technical elements, and use big-data analysis to set the cornerstone for decision-making. In addition, established decision-making is improved through subsequent repetition and renewal of decision-making standards. In other words, big-data analysis, which enables data classification and recognition, is important enough to be called a key technical element of AI, and it requires sophisticated methods. In this study, among the various tools that can analyze big data, we use a Python program to find out which variables can affect addiction related to smartphone use in a decision-tree environment. We check whether data classification by decision tree in Python shows the same performance as other tools, and whether it can lend reliability to decision-making about the addictiveness of smartphone use. The results of this study show that there is no problem in performing big data analysis with any of the various statistical tools, such as Python and R.
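The kind of Python decision-tree analysis the abstract describes can be sketched with scikit-learn, a common choice for this task (the paper's exact toolchain and dataset are not specified here). The two usage variables, the labels, and the sample values below are hypothetical illustrations, not the study's data.

```python
# Minimal decision-tree sketch: classify hypothetical smartphone-usage
# profiles (daily hours of use, daily app launches) as 0 = not at risk
# or 1 = at risk, then inspect which variable drove the splits.
from sklearn.tree import DecisionTreeClassifier

X = [[1, 20], [2, 35], [3, 40], [6, 120], [7, 150], [8, 200]]
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(tree.predict([[5, 110]]))    # -> [1]: a new profile classified as at risk
print(tree.feature_importances_)   # which variable the tree actually split on
```

In the study's setting, `feature_importances_` is the piece that answers "which variables affect addiction": a variable with importance near zero never drove a split.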

A study on non-storage data recording system and non-storage data providing method by smart QR code (스마트한 QR코드에 의한 비저장식 데이터 기록 시스템 및 비저장식 데이터 제공방법에 관한 연구)

  • Oh, Eun-Yeol
    • Journal of Convergence for Information Technology, v.9 no.4, pp.14-20, 2019
  • The purpose of this paper is to present a smart QR-code recording system and a non-storage data-providing method, in which the original data is encrypted and transformed into URL information, and that URL information is encoded into a QR code, so that the QR code can be written on a medium and later decrypted without the original data ever being stored on the medium. The study was conducted through a prior-art review and a literature review. The analysis shows that the system is built on an online administration server: the data input signal matching a secret code is stored in a DB, and on a QR-code generation command the input data is converted, via the password DB, into encrypted information combined into a subordinate locator of the admin server's domain name, i.e., a URL code. Therefore, the smart QR method of data management (recording and providing) has no limitations in ease or space of use and no obstacles regarding capacity.
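The core idea above — the medium carries only a URL whose subordinate locator resolves to server-held data — can be sketched as follows. The domain name, token scheme, and in-memory DB are hypothetical; a real deployment would add authenticated encryption and an actual QR encoder (e.g. the third-party `qrcode` package), neither of which is shown here.

```python
# Sketch of the non-storage scheme: the original data stays on the admin
# server; only an unguessable URL is ever encoded into the QR code.
import secrets

ADMIN_DOMAIN = "https://admin.example.com"  # assumed server domain
password_db = {}  # server-side DB: token -> original data

def register(data):
    """Store data server-side and return the URL to encode as a QR code."""
    token = secrets.token_urlsafe(16)      # unguessable subordinate locator
    password_db[token] = data
    return f"{ADMIN_DOMAIN}/{token}"

def resolve(url):
    """Server side: look up the original data from a scanned URL."""
    token = url.rsplit("/", 1)[-1]
    return password_db.get(token)

url = register("calibration record #417")
print(url)            # the only thing the QR code would carry
print(resolve(url))   # -> calibration record #417
```

Because the medium stores nothing but the locator, revoking access is a server-side deletion; the printed QR code itself never has to be destroyed.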

CFD/RELAP5 coupling analysis of the ISP No. 43 boron dilution experiment

  • Ye, Linrong;Yu, Hao;Wang, Mingjun;Wang, Qianglong;Tian, Wenxi;Qiu, Suizheng;Su, G.H.
    • Nuclear Engineering and Technology, v.54 no.1, pp.97-109, 2022
  • Multi-dimensional coupling analysis is a hot spot in nuclear reactor thermal-hydraulics research: the full-scale system transient response and key local three-dimensional thermal-hydraulic phenomena can be obtained simultaneously, achieving a balance between efficiency and accuracy in the numerical simulation of nuclear reactors. A one-dimensional to three-dimensional (1D-3D) coupling platform for multi-dimensional nuclear reactor analysis was developed by XJTU-NuTheL (Nuclear Thermal-hydraulic Laboratory at Xi'an Jiaotong University) based on the CFD code Fluent and the system code RELAP5, through Dynamic Link Library (DLL) technology and Fluent user-defined functions (UDF). In this paper, International Standard Problem (ISP) No. 43 is selected as the benchmark, and the rapid boron dilution transient in the nuclear reactor is studied with the coupling code. Code validation is conducted first, and the numerical simulation results show good agreement with the experimental data. The three-dimensional flow and temperature fields in the downcomer are analyzed in detail during the transient scenarios. A strong reverse flow is observed beneath the inlet cold leg, causing the de-borated water slug to diffuse mainly in the circumferential direction. Deviations between the experimental data and the transients predicted by the coupling code are also discussed.