• Title/Summary/Keyword: Unstructured task

Search results: 43

Exploring the Impact on Problem Solving Ability according to the Level of Structuring of Curriculum Tasks in PBL Activities (PBL 활동에서 교육과정 편성 과제의 구조화 정도가 문제해결력에 미치는 영향 탐색)

  • Lee, Eun-Chul
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.7
    • /
    • pp.282-291
    • /
    • 2020
  • This study explored the effect of the level of structuring of curriculum tasks in PBL activities on students' problem solving ability. The study was conducted on 60 college students in curriculum classes. Problem solving ability was measured at the beginning of the semester, and PBL activities were conducted after the midterm exam. The experimental group of 30 students performed unstructured tasks, while the comparison group of 30 students performed semi-structured problems. After the tasks were completed, problem solving ability was measured again at the end of the semester. The collected data were analyzed using ANCOVA. As a result, it was verified that the experimental group showed statistically significantly greater improvement than the comparison group in information collection, evaluation, and feedback.
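
As a rough illustration of the analysis described in this abstract, the sketch below runs an ANCOVA in Python with pandas and statsmodels, using the pre-test score as the covariate. The column names (`pre`, `post`, `group`) and the data file are hypothetical placeholders, not materials from the paper.

```python
# Hedged sketch: ANCOVA comparing post-test problem solving scores between
# an unstructured-task group and a semi-structured-task group, with the
# pre-test score as covariate. Column names and file are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Expected columns: group ("experimental"/"comparison"), pre, post
df = pd.read_csv("pbl_scores.csv")

# Fit post ~ covariate (pre) + group factor
model = smf.ols("post ~ pre + C(group)", data=df).fit()

# Type-II ANCOVA table: the C(group) row tests the group effect
# after adjusting for the pre-test covariate.
print(sm.stats.anova_lm(model, typ=2))

# Unadjusted group means, as a descriptive check
print(df.groupby("group")[["pre", "post"]].mean())
```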

Recursive compensation algorithm application to the optimal edge selection

  • Chung, C.H.;Lee, K.S.
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회: 학술대회논문집)
    • /
    • 1992.10b
    • /
    • pp.79-84
    • /
    • 1992
  • Path planning is an important task for the optimal motion of a robot in a structured or unstructured environment. The goal of this paper is to plan an optimal collision-free path in 3D when a robot is navigated to pick up tools or repair parts at various locations. To accomplish this goal, the Path Coordinator is proposed, which combines an obstacle avoidance strategy and a traveling salesman problem (TSP) strategy. The obstacle avoidance strategy plans the shortest collision-free path between each pair of n locations in 2D or 3D. The TSP strategy computes the minimal system cost of a tour, defined as a closed path visiting each location exactly once, and can be implemented by a Hopfield network. The obstacle avoidance strategy in 2D can be implemented by the VGraph Algorithm; however, the VGraph Algorithm is not useful in 3D because it cannot guarantee global optimality there. Thus, the Path Coordinator is used to solve this problem by selecting the optimal edges with a modified Genetic Algorithm and computing the optimal nodes along those edges with the Recursive Compensation Algorithm.
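
To make the TSP strategy in this abstract concrete, here is a minimal Python sketch that finds the minimal-cost closed tour over a handful of locations, given a matrix of shortest collision-free path costs between each pair. A brute-force search stands in for the Hopfield-network formulation, and the paper's Recursive Compensation Algorithm is not reproduced; the cost values are illustrative.

```python
# Hedged sketch: pick the minimal-cost closed tour over n locations,
# given pairwise shortest collision-free path costs. Brute force stands in
# for the Hopfield-network TSP strategy described in the abstract.
from itertools import permutations

# cost[i][j] = length of the shortest collision-free path from i to j
# (illustrative values; a real system would compute these with a
# visibility-graph or similar planner)
cost = [
    [0.0, 2.0, 9.0, 10.0],
    [2.0, 0.0, 6.0, 4.0],
    [9.0, 6.0, 0.0, 8.0],
    [10.0, 4.0, 8.0, 0.0],
]

def tour_cost(order):
    """Total cost of visiting locations in `order` and returning to start."""
    legs = zip(order, order[1:] + order[:1])
    return sum(cost[a][b] for a, b in legs)

n = len(cost)
# Fix location 0 as the start to avoid counting rotations of the same tour
best = min(((0,) + p for p in permutations(range(1, n))), key=tour_cost)
print("best tour:", best, "cost:", tour_cost(best))
```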


A Novel Classification Model for Efficient Patent Information Research (효율적인 특허정보 조사를 위한 분류 모형)

  • Kim, Youngho;Park, Sangsung;Jang, Dongsik
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.15 no.4
    • /
    • pp.103-110
    • /
    • 2019
  • A patent contains detailed information about a developed technology and is published to the public, so patents can be used to overcome the limitations of traditional technology trend research and prediction techniques. Recently, IP R&D based on patent analysis methodology has been carried out worldwide. Patent data are big data: they are enormous in volume, span various domains, and contain both structured and unstructured information. For this reason, collecting and researching patent information involves many difficulties. Patent research generally involves writing a search formula to collect patent documents from a database. The collected patent documents contain noise patents that are irrelevant to the purpose of the analysis, so these must be removed. However, eliminating noise patents is a manual task of reading and classifying the technology, which is time consuming and expensive. In this study, we propose a model that automatically classifies noise patents for efficient patent information research. The proposed method performs patent embedding using Word2Vec and generates noise seed labels. In addition, noise patent classification is performed using a random forest. The experimental data are patents related to Ocean Surveillance & Tracking Network technology that were published and registered with the USPTO. In experiments, the proposed model showed 73% accuracy against the labels actually given by experts.
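
The pipeline described in this abstract, Word2Vec patent embedding followed by random forest noise classification, can be sketched roughly as follows. The tokenized documents, seed labels, and hyperparameters are hypothetical placeholders, and averaging word vectors into a document vector is one common choice rather than necessarily the paper's exact procedure.

```python
# Hedged sketch of the described pipeline: embed patent text with Word2Vec,
# then classify noise vs. relevant patents with a random forest.
# Tokenization, labels, and hyperparameters are illustrative assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

# Tokenized patent abstracts (placeholder data)
docs = [
    ["ocean", "surveillance", "tracking", "network", "sensor"],
    ["underwater", "acoustic", "tracking", "target"],
    ["cosmetic", "container", "cap", "design"],       # noise example
    ["beverage", "bottle", "label", "printing"],      # noise example
]
labels = [0, 0, 1, 1]  # 0 = relevant, 1 = noise (seed labels)

# Train word embeddings on the corpus
w2v = Word2Vec(sentences=docs, vector_size=50, window=5, min_count=1, seed=1)

def doc_vector(tokens):
    """Average the word vectors of a document (a simple embedding choice)."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.vstack([doc_vector(d) for d in docs])

# Random forest noise classifier trained on the seed labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))
```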

Stream-based Biomedical Classification Algorithms for Analyzing Biosignals

  • Fong, Simon;Hang, Yang;Mohammed, Sabah;Fiaidhi, Jinan
    • Journal of Information Processing Systems
    • /
    • v.7 no.4
    • /
    • pp.717-732
    • /
    • 2011
  • Classification in biomedical applications is an important task that predicts or classifies an outcome based on a given set of input variables, such as diagnostic tests or the symptoms of a patient. Traditionally, classification algorithms have to digest a stationary set of historical data in order to train a decision-tree model, and the learned model can then be used for testing new samples. However, a new breed of classification, called stream-based classification, can handle continuous data streams, which are ever evolving, unbounded, and unstructured, for instance biosignal live feeds. These emerging algorithms can potentially be used for real-time classification over biosignal data streams such as EEG and ECG. This paper presents a pioneering effort that studies the feasibility of classification algorithms for analyzing biosignals in the form of infinite data streams. First, a performance comparison is made between traditional and stream-based classification. The results show that accuracy declines intermittently for traditional classification due to the requirement of model re-learning as new data arrive. Second, we show by simulation that biosignal data streams can be processed with a satisfactory level of performance in terms of accuracy, memory requirement, and speed by using a collection of stream-mining algorithms called Optimized Very Fast Decision Trees. These algorithms can effectively serve as a cornerstone technology for real-time classification in future biomedical applications.
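
The test-then-train (prequential) pattern that stream-based classification relies on can be sketched as below. The `river` library's `HoeffdingTreeClassifier` is used here as a stand-in for the Optimized Very Fast Decision Trees studied in the paper, and the synthetic features and label rule are placeholders for real windowed EEG/ECG statistics.

```python
# Hedged sketch: prequential (test-then-train) evaluation of a Hoeffding
# tree on a streaming feed, as a stand-in for the Optimized Very Fast
# Decision Trees studied in the paper. The synthetic "biosignal" features
# and labels are placeholders, and the river library is assumed.
import random
from river import tree, metrics

model = tree.HoeffdingTreeClassifier()
acc = metrics.Accuracy()

random.seed(0)
for t in range(10_000):
    # Placeholder features standing in for windowed EEG/ECG statistics
    x = {"mean": random.gauss(0, 1), "variance": random.random()}
    y = x["mean"] > 0  # placeholder label rule

    y_pred = model.predict_one(x)   # test on the sample first...
    if y_pred is not None:
        acc.update(y, y_pred)
    model.learn_one(x, y)           # ...then train on it

print("prequential accuracy:", acc.get())
```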

Reinforcement Learning-based Search Trajectory Generation and Stiffness Tuning for Connector Assembly (커넥터 조립을 위한 강화학습 기반의 탐색 궤적 생성 및 로봇의 임피던스 강성 조절 방법)

  • Kim, Yong-Geon;Na, Minwoo;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.4
    • /
    • pp.455-462
    • /
    • 2022
  • Since electric connectors such as power connectors have small assembly tolerances and complex shapes, the assembly process is performed manually by workers. In particular, it is difficult to overcome assembly errors, and assembly takes a long time due to the error correction process, which makes it difficult to automate the assembly task. To deal with this problem, a reinforcement learning-based assembly strategy using contact states is proposed to quickly perform the assembly process in an unstructured environment. This method learns to generate a search trajectory to quickly find the hole based on the contact state obtained from force/torque data. It also learns the stiffness needed to avoid excessive contact forces during assembly. To verify the proposed method, the power connector assembly process was performed 200 times, and the method achieved an assembly success rate of 100% for translation errors within ±4 mm and rotation errors within ±3.5°. Furthermore, it was verified that the assembly time was about 2.3 sec, including a search time of about 1 sec, which is faster than previous methods.
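
The abstract does not give the learned policy itself, but its two ingredients, a hole-search trajectory and impedance stiffness, can be illustrated with a generic hand-written sketch like the one below. The spiral search pattern and the diagonal stiffness and damping values are illustrative assumptions, not the paper's reinforcement-learned outputs.

```python
# Hedged, generic illustration (not the paper's learned policy): an
# Archimedean spiral search pattern around the nominal hole position,
# and a Cartesian impedance law F = K (x_d - x) - D dx that a stiffness
# parameter would scale. All numeric values are illustrative.
import numpy as np

def spiral_search(center, pitch=0.5e-3, n_points=200):
    """Generate XY waypoints spiraling outward from `center` (meters)."""
    theta = np.linspace(0.0, 6 * np.pi, n_points)
    r = (pitch / (2 * np.pi)) * theta
    return center + np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

def impedance_force(x_desired, x, dx, stiffness, damping):
    """Cartesian impedance control force for a given stiffness setting."""
    K = np.diag(stiffness)
    D = np.diag(damping)
    return K @ (x_desired - x) - D @ dx

waypoints = spiral_search(center=np.array([0.0, 0.0]))
force = impedance_force(
    x_desired=np.array([0.001, 0.0]),        # 1 mm offset target
    x=np.zeros(2),
    dx=np.zeros(2),
    stiffness=[500.0, 500.0],                # N/m, kept low for compliance
    damping=[40.0, 40.0],                    # N·s/m
)
print(waypoints.shape, force)
```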

Boundary and Reverse Attention Module for Lung Nodule Segmentation in CT Images (CT 영상에서 폐 결절 분할을 위한 경계 및 역 어텐션 기법)

  • Hwang, Gyeongyeon;Ji, Yewon;Yoon, Hakyoung;Lee, Sang Jun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.5
    • /
    • pp.265-272
    • /
    • 2022
  • As the risk of lung cancer has increased, early-stage detection and treatment of cancers have received a lot of attention. Among various medical imaging approaches, computed tomography (CT) has been widely utilized to examine the size and growth rate of lung nodules. However, manual examination is a time-consuming task and causes physical and mental fatigue for medical professionals. Recently, many computer-aided diagnostic methods have been proposed to reduce the workload of medical professionals. In recent studies, encoder-decoder architectures have shown reliable performance in medical image segmentation and have been adopted to predict lesion candidates. However, localizing nodules in lung CT images is a challenging problem due to the extremely small sizes and unstructured shapes of nodules. To address these problems, we utilize atrous spatial pyramid pooling (ASPP) on a general U-Net baseline model to minimize the loss of information and extract rich representations from various receptive fields. Moreover, we propose a mixed attention mechanism combining reverse attention, boundary attention, and the convolutional block attention module (CBAM) to improve segmentation accuracy for small nodules of various shapes. The performance of the proposed model is compared with several previous attention mechanisms on the LIDC-IDRI dataset, and experimental results demonstrate that the combination of reverse, boundary, and CBAM attention (RB-CBAM) is effective in the segmentation of small nodules.
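
As a rough sketch of the ASPP component mentioned in this abstract, the following PyTorch module runs parallel dilated convolutions over a feature map and fuses them with a 1x1 convolution. The channel sizes and dilation rates are illustrative, and the paper's reverse/boundary/CBAM attention blocks are not reproduced here.

```python
# Hedged sketch of an atrous spatial pyramid pooling (ASPP) block in
# PyTorch: parallel dilated convolutions capture several receptive fields
# and are fused by a 1x1 convolution. Channel counts and dilation rates
# are illustrative; the paper's RB-CBAM attention is not reproduced.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d,
                          dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated branch outputs back to out_ch channels
        self.project = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

# Example: plug ASPP at the bottleneck of a U-Net-like encoder output
x = torch.randn(1, 256, 32, 32)    # dummy CT feature map
print(ASPP(256, 128)(x).shape)     # -> torch.Size([1, 128, 32, 32])
```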

Development of Semantic Risk Breakdown Structure to Support Risk Identification for Bridge Projects

  • Isah, Muritala Adebayo;Jeon, Byung-Ju;Yang, Liu;Kim, Byung-Soo
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.245-252
    • /
    • 2022
  • Risk identification for bridge projects is a knowledge-based and labor-intensive task involving several procedures and stakeholders. Presently, risk information for bridge projects is unstructured and stored in different sources and formats, hindering knowledge sharing, reuse, and automation of the risk identification process. Consequently, there is a need to develop structured and formalized risk information for bridge projects to aid effective risk identification and automation of the risk management processes to ensure project success. This study proposes a semantic risk breakdown structure (SRBS) to support risk identification for bridge projects. SRBS is a searchable hierarchical risk breakdown structure (RBS) developed in the Python programming language based on a semantic modeling approach. The proposed SRBS for risk identification of bridge projects consists of a 4-level tree structure with 11 categories of risks and 116 potential risks associated with bridge projects. The contributions of this paper are threefold. First, this study fills a gap in knowledge by presenting a formalized risk breakdown structure that can enhance the risk identification of bridge projects. Second, the proposed SRBS can assist in the creation of a risk database to support the automation of the risk identification process for bridge projects and reduce manual effort. Lastly, the proposed SRBS can be used as a risk ontology to aid the development of an artificial intelligence-based integrated risk management system for construction projects.
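
The abstract describes a searchable hierarchical risk breakdown structure implemented in Python; the idea can be sketched in a few lines as below. The category and risk names are hypothetical examples, not the paper's 11 categories or 116 risks.

```python
# Hedged sketch of a searchable, hierarchical risk breakdown structure:
# a small tree with keyword search over node labels. The category and
# risk names are hypothetical, not the paper's 11 categories / 116 risks.
from dataclasses import dataclass, field

@dataclass
class RBSNode:
    label: str
    children: list = field(default_factory=list)

    def add(self, label):
        child = RBSNode(label)
        self.children.append(child)
        return child

    def search(self, keyword, path=()):
        """Yield root-to-node paths whose node label contains `keyword`."""
        here = path + (self.label,)
        if keyword.lower() in self.label.lower():
            yield here
        for child in self.children:
            yield from child.search(keyword, here)

# Build a tiny illustrative structure (the paper's SRBS has 4 levels)
root = RBSNode("Bridge project risks")
design = root.add("Design risks")
design.add("Incomplete geotechnical investigation")
constr = root.add("Construction risks")
constr.add("Foundation risks").add("Scour at pier foundations")

for hit in root.search("foundation"):
    print(" > ".join(hit))
```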


Development of Expert system for Plant Construction Project Management (플랜트 건설 공사를 위한 사업관리 전문가 시스템의 개발)

  • 김우주;최대우;김정수
    • Journal of Information Technology Application
    • /
    • v.2 no.1
    • /
    • pp.1-24
    • /
    • 2000
  • Project management in the construction field inherently involves more uncertainty and risk than in other areas, which is why project management is recognized as an important task for construction companies. To achieve better performance in project management, a system is needed that maintains consistency in an automatic or semi-automatic manner across the project management stages, such as the project definition, project planning, and project design and implementation stages. However, since the early stages, such as the definition and planning stages, have many unstructured features and depend on the unique expertise or experience of a specific company, it is difficult to provide systematic support for the tasks of these stages. This problem becomes even harder to solve in the plant construction domain, which is our target domain. Therefore, in this paper, we propose and implement a systematic approach to resolve this problem for the early project management stages in the plant construction domain. The results of our approach can be used not only for the early project management stages but also as input to commercial project management tools for the middle project management stages, so that a construction project can be consistently managed from the definition stage to the implementation stage in a seamless manner. To achieve this, we adopt knowledge-based inference, case-based reasoning (CBR), and neural networks as the major methodologies, and we applied our approach to two real-world cases, a power plant and a drainage treatment plant, from a leading construction company in Korea. Since these two application cases showed very successful results, our approach was successfully validated for the plant construction area. Finally, we believe our approach will contribute to many project management problems in the broader construction area.
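
Among the methodologies named in this abstract, case-based reasoning is the most mechanical to illustrate. The sketch below retrieves the most similar past plant project by a normalized, weighted feature distance; the case base, features, and weights are entirely hypothetical and stand in for the company-specific knowledge the paper draws on.

```python
# Hedged illustration of the case-based reasoning (CBR) step: retrieve the
# most similar past plant project by weighted feature similarity. The case
# base, features, and weights are hypothetical placeholders.
import numpy as np

# Past project cases: name -> features [capacity, duration_months, budget]
case_base = {
    "Power plant A":    np.array([500.0, 36.0, 420.0]),
    "Power plant B":    np.array([350.0, 30.0, 300.0]),
    "Drainage plant C": np.array([80.0, 18.0, 60.0]),
}
weights = np.array([0.5, 0.2, 0.3])   # relative importance of each feature

def retrieve(query, cases, w):
    """Return the case whose normalized, weighted features are closest."""
    names = list(cases)
    X = np.vstack([cases[n] for n in names])
    scale = X.max(axis=0) - X.min(axis=0)
    scale[scale == 0] = 1.0
    dists = np.sqrt((((X - query) / scale) ** 2 * w).sum(axis=1))
    return names[int(np.argmin(dists))]

new_project = np.array([450.0, 34.0, 380.0])
print("Most similar past case:", retrieve(new_project, case_base, weights))
```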


A Study on Improving Performance of the Deep Neural Network Model for Relational Reasoning (관계 추론 심층 신경망 모델의 성능개선 연구)

  • Lee, Hyun-Ok;Lim, Heui-Seok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.12
    • /
    • pp.485-496
    • /
    • 2018
  • So far, deep learning, a field of artificial intelligence, has achieved remarkable results in solving problems from unstructured data. However, it still struggles to judge situations comprehensively as humans do, and it has not reached the level of intelligence that infers relations among entities and predicts the next situation. Recently, deep neural networks have shown that artificial intelligence can possess powerful relational reasoning, a core intellectual ability of human beings. In this paper, to analyze and observe the performance of Relation Networks (RN) among the neural networks for relational reasoning, two types of RN-based deep neural network models were constructed and compared with baseline models: a visual question answering RN model using Sort-of-CLEVR and a text-based question answering RN model using the bAbI task. In order to maximize the performance of the RN-based models, various performance improvement experiments, such as hyperparameter tuning, were proposed and performed. The effectiveness of the proposed performance improvement methods was verified by applying them to the visual QA RN model, the text-based QA RN model, and a new domain model using the dialogue-based LL dataset. As a result of the various experiments, it was found that the initial learning rate is a key factor in determining the performance of both types of RN models. We observed that the optimal initial learning rate found by the proposed random search method can improve the performance of the model up to 99.8%.
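
The abstract's key finding, that the initial learning rate dominates RN performance and can be located by random search, can be illustrated with a generic random search loop like this one. The `train_and_evaluate` function is a hypothetical stand-in for training the Relation Network, not code from the paper.

```python
# Hedged sketch of random search over the initial learning rate, the
# hyperparameter the abstract identifies as decisive for RN performance.
# `train_and_evaluate` is a hypothetical stand-in for RN training.
import math
import random

def train_and_evaluate(learning_rate):
    """Placeholder: train the RN model and return validation accuracy."""
    # In a real run this would build and train the Relation Network;
    # here a synthetic curve peaking near 1e-4 keeps the sketch runnable.
    return max(0.0, 1.0 - abs(math.log10(learning_rate) + 4.0) * 0.2)

random.seed(0)
best_lr, best_acc = None, -1.0
for _ in range(20):
    # Sample log-uniformly between 1e-6 and 1e-2
    lr = 10 ** random.uniform(-6, -2)
    acc = train_and_evaluate(lr)
    if acc > best_acc:
        best_lr, best_acc = lr, acc

print(f"best initial learning rate: {best_lr:.2e}, accuracy: {best_acc:.3f}")
```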

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong;Namgyu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.165-184
    • /
    • 2023
  • Recently, deep learning analysis of unstructured text data using language models such as Google's BERT and OpenAI's GPT has shown remarkable results in various applications. Most language models learn generalized linguistic information from pre-training data and then update their weights for downstream tasks through a fine-tuning process. However, concerns have been raised that privacy may be violated when using these language models: data privacy may be violated when the data owner provides large amounts of data to the model owner to perform fine-tuning, and conversely, when the model owner discloses the entire model to the data owner, the structure and weights of the model are revealed, which may violate the privacy of the model. The concept of offsite tuning has recently been proposed to fine-tune language models while protecting privacy in such situations, but that study has the limitation of not providing a concrete way to apply the proposed methodology to text classification models. In this study, we propose a concrete method to apply offsite tuning with an additional classifier to protect the privacy of both the model and the data when performing multi-class classification fine-tuning on Korean documents. To evaluate the performance of the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields (ICT, electrical, electronic, mechanical, and medical) provided by AIHub, and found that the proposed plug-in model outperforms the zero-shot model and the offsite model in terms of classification accuracy.
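
The general offsite-tuning arrangement that the abstract builds on can be sketched as follows: the model owner keeps the full backbone private and ships only a compressed emulator plus small trainable adapters, and the data owner trains the adapters and a classification head against the frozen emulator before returning them. The layer sizes, the emulator construction, and the task head below are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of the offsite-tuning idea with an added classifier head:
# the data owner sees only a frozen, compressed "emulator" of the middle
# layers plus small trainable adapters; the trained adapters and classifier
# are later plugged back into the owner's full model. Layer sizes and the
# emulator construction are illustrative assumptions.
import torch
import torch.nn as nn

hidden, n_classes = 128, 5

# --- data-owner side -------------------------------------------------------
bottom_adapter = nn.Linear(hidden, hidden)          # trainable
emulator = nn.Sequential(                           # frozen, compressed middle
    nn.Linear(hidden, hidden), nn.ReLU(),
    nn.Linear(hidden, hidden), nn.ReLU(),
)
for p in emulator.parameters():
    p.requires_grad = False

top_adapter = nn.Linear(hidden, hidden)             # trainable
classifier = nn.Linear(hidden, n_classes)           # trainable task head

params = (list(bottom_adapter.parameters())
          + list(top_adapter.parameters())
          + list(classifier.parameters()))
optimizer = torch.optim.AdamW(params, lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy document embeddings
x = torch.randn(8, hidden)                # stand-in for encoded documents
y = torch.randint(0, n_classes, (8,))
logits = classifier(top_adapter(emulator(bottom_adapter(x))))
loss = loss_fn(logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Only the adapter/classifier weights would be sent back to the model owner,
# who plugs them around the full (private) middle layers at inference time.
print(loss.item())
```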