• Title/Summary/Keyword: 학습 데이터 (Training Data)


Detecting high-resolution usage status of individual parcel of land using object detecting deep learning technique (객체 탐지 딥러닝 기법을 활용한 필지별 조사 방안 연구)

  • Jeon, Jeong-Bae
    • Journal of Cadastre & Land InformatiX
    • /
    • v.54 no.1
    • /
    • pp.19-32
    • /
    • 2024
  • This study examined the feasibility of image-based surveys by detecting facilities and agricultural land in drone images with the YOLO algorithm and comparing the detections with the legal land category. The YOLO model detected buildings corresponding to 96.3% of those recorded in the existing digital map and found 136 additional buildings that were not in the map. For plastic greenhouses, a total of 297 objects were detected, but the detection rate was low for some greenhouses used for fruit trees, and agricultural land showed the lowest detection rate of all. This is because agricultural parcels are larger and more irregular in shape than buildings, so inconsistency in the training data lowers accuracy; segmentation-based detection, rather than box-shaped detection, is therefore likely to be more effective for agricultural fields. Comparing the detected objects with the legal land category showed that some buildings exist in agricultural and forest zones where buildings are normally difficult to site, and linkage with administrative information appears necessary to determine whether these buildings are being used illegally. At the current level, the approach makes it possible to objectively confirm the existence of buildings in areas where they are otherwise hard to locate.
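For illustration only, the sketch below shows how such per-image object detection and counting might look with a generic YOLO detector. The ultralytics package, the fine-tuned weights file, the image name, and the class names are all assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): run a YOLO detector on one drone
# image tile and count detections per class. Assumes the `ultralytics`
# package and a hypothetical model fine-tuned on building/greenhouse/farmland
# classes; file and class names are placeholders.
from collections import Counter
from ultralytics import YOLO

model = YOLO("parcel_survey_yolo.pt")            # hypothetical fine-tuned weights
results = model.predict("drone_tile_0001.jpg", conf=0.25)

counts = Counter()
for r in results:
    for cls_id in r.boxes.cls.tolist():          # class index of each detected box
        counts[model.names[int(cls_id)]] += 1

print(counts)  # e.g. Counter({'building': 42, 'greenhouse': 7, 'farmland': 3})
```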

Nondestructive Quantification of Corrosion in Cu Interconnects Using Smith Charts (스미스 차트를 이용한 구리 인터커텍트의 비파괴적 부식도 평가)

  • Minkyu Kang;Namgyeong Kim;Hyunwoo Nam;Tae Yeob Kang
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.31 no.2
    • /
    • pp.28-35
    • /
    • 2024
  • Corrosion inside electronic packages significantly impacts system performance and reliability, necessitating non-destructive diagnostic techniques for system health management. This study presents a non-destructive method for assessing corrosion in copper interconnects using the Smith chart, a tool that integrates the magnitude and phase of complex impedance into a single visualization. For the experiment, specimens simulating copper transmission lines were subjected to temperature and humidity cycles according to the MIL-STD-810G standard to induce corrosion. The corrosion level of each specimen was quantitatively assessed and labeled based on color changes in the R channel. As corrosion progressed, the S-parameters and Smith charts showed distinct patterns corresponding to five levels of corrosion, confirming the effectiveness of the Smith chart as a tool for corrosion assessment. Furthermore, by employing data augmentation, 4,444 Smith charts representing various corrosion levels were obtained, and artificial intelligence models were trained to output the corrosion stage of copper interconnects from an input Smith chart. Among the CNN and Transformer models specialized for image classification, the ConvNeXt model achieved the highest diagnostic performance with an accuracy of 89.4%. Diagnosing corrosion from the Smith chart enables non-destructive evaluation based on electrical signals, and because the chart integrates and visualizes both signal magnitude and phase, the approach is expected to support an intuitive and noise-robust diagnosis.
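A minimal sketch of how an image classifier such as ConvNeXt might be fine-tuned to map Smith-chart images to five corrosion levels. The torchvision model, directory layout, and hyperparameters are assumptions rather than the paper's actual training setup.

```python
# Sketch (assumptions, not the paper's code): fine-tune a ConvNeXt classifier
# to predict one of five corrosion levels from Smith-chart images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects smith_charts/train/<level_0 .. level_4>/*.png (hypothetical layout)
train_ds = datasets.ImageFolder("smith_charts/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)
model.classifier[2] = nn.Linear(model.classifier[2].in_features, 5)  # 5 corrosion levels

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for imgs, labels in train_dl:        # one pass shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(imgs), labels)
    loss.backward()
    opt.step()
```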

Class-Agnostic 3D Mask Proposal and 2D-3D Visual Feature Ensemble for Efficient Open-Vocabulary 3D Instance Segmentation (효율적인 개방형 어휘 3차원 개체 분할을 위한 클래스-독립적인 3차원 마스크 제안과 2차원-3차원 시각적 특징 앙상블)

  • Sungho Song;Kyungmin Park;Incheol Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.7
    • /
    • pp.335-347
    • /
    • 2024
  • Open-vocabulary 3D point cloud instance segmentation (OV-3DIS) is a challenging visual task that segments a 3D scene point cloud into object instances of both base and novel classes. In this paper, we propose a novel model, Open3DME, for OV-3DIS that addresses important design issues and overcomes limitations of existing approaches. First, to improve the quality of class-agnostic 3D masks, our model uses T3DIS, an advanced Transformer-based 3D point cloud instance segmentation model, as its mask proposal module. Second, to obtain semantically text-aligned visual features for each point cloud segment, our model extracts both 2D and 3D features from the point cloud and the corresponding multi-view RGB images using pretrained CLIP and OpenSeg encoders, respectively. Last, to make effective use of both the 2D and 3D visual features of each point cloud segment during label assignment, our model adopts a unique feature ensemble method. To validate our model, we conducted both quantitative and qualitative experiments on the ScanNet-V2 benchmark dataset, demonstrating significant performance gains.
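As a rough illustration of the kind of 2D-3D feature ensemble described above (not the Open3DME implementation; the tensor shapes, the fixed ensemble weight, and the use of cosine similarity against CLIP text embeddings are assumptions), the following sketch assigns an open-vocabulary label to a single 3D mask.

```python
# Conceptual sketch: ensemble a per-mask 2D feature and 3D feature against
# text embeddings via cosine similarity, then pick the best-scoring class.
import torch
import torch.nn.functional as F

num_classes, dim = 20, 768
text_emb = F.normalize(torch.randn(num_classes, dim), dim=-1)  # stand-in for CLIP text features
feat_2d  = F.normalize(torch.randn(dim), dim=-1)               # per-mask 2D visual feature
feat_3d  = F.normalize(torch.randn(dim), dim=-1)               # per-mask 3D visual feature

alpha = 0.5                                   # ensemble weight (hypothetical)
score_2d = text_emb @ feat_2d                 # cosine similarity to each class prompt
score_3d = text_emb @ feat_3d
score = alpha * score_2d + (1 - alpha) * score_3d

label = int(score.argmax())
print(f"assigned class index: {label}")
```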

A Study on Impact of Self-Service Technology on Library Kiosk Service Satisfaction and Usage Intention: Toward a Task-Technology Fit Model (셀프서비스 기술이 도서관 키오스크 서비스 만족과 이용의도에 미치는 영향 연구: 과업-기술 적합성 모델을 중심으로)

  • Jun Kyu Keum;Jee Yeon Lee
    • Journal of the Korean Society for Information Management
    • /
    • v.41 no.3
    • /
    • pp.1-32
    • /
    • 2024
  • This study explores the use of kiosks, a case of self-service technology in library services, by applying task-technology fit theory to identify the factors that affect satisfaction with and continued use of library kiosk services, and to review library non-face-to-face services. We organized the kiosk characteristic factors through a literature review and established a research model grounded in the related theories. We collected 229 valid questionnaires from users with experience using library kiosks and analyzed them with SPSS 26.0 and SmartPLS 4.0. The analysis confirmed that the fit between library services and self-service technology was significantly influenced by the usefulness and enjoyment among the kiosk technology characteristics and by the kiosk-friendly environment among the usage-environment attributes. We also found that this fit significantly affects library kiosk usage satisfaction and the intention to continue using the kiosk, and on the basis of these significant factors the study proposes a plan for library kiosk services. In addition, to use kiosks effectively as a non-face-to-face library service, we suggest operating them in connection with the provision of library information materials, installing them in various locations within the library to increase accessibility, and providing user education for the digitally disadvantaged to support learning and raise positive awareness of the kiosks.

Legal Issues and Regulatory Discussions in Generative AI (생성형 AI의 법적 문제와 규제 논의 동향)

  • Kim, Beop-Yeon
    • Informatization Policy
    • /
    • v.31 no.3
    • /
    • pp.3-33
    • /
    • 2024
  • This paper summarizes the legal problems and issues raised in relation to generative AI. It also reviews the regulatory discussions that individual countries and international organizations are holding to resolve or respond to these issues and to minimize the risks posed by generative AI. The main topics are the infringement of individuals' basic rights by generative AI, the emergence and control of new crimes, the monopolization of specific markets, and environmental issues; although there are some differences in the perceived necessity and direction of regulation, most countries appear to hold similar views. The issues currently being raised about AI have been discussed continuously since its appearance. Certain issues have been discussed extensively, yet differences remain between countries, and situations are emerging that require attention to phenomena different from those of the past. Regulations and policies appear to be refined according to each country's situation. As various issues rapidly emerge and change, measures should be sought to minimize the risks of AI and to enjoy its utility and benefits through the use of safe AI. It will be necessary to continuously identify and analyze international trends and to reorganize AI-related regulations and detailed policies suitable for Korea.

Development and Application of a Scenario Analysis System for CBRN Hazard Prediction (화생방 오염확산 시나리오 분석 시스템 구축 및 활용)

  • Byungheon Lee;Jiyun Seo;Hyunwoo Nam
    • Journal of the Korea Society for Simulation
    • /
    • v.33 no.3
    • /
    • pp.13-26
    • /
    • 2024
  • The CBRN (Chemical, Biological, Radiological, and Nuclear) hazard prediction model is a system that supports commanders' decision making by generating contamination distribution and damage prediction areas from the weapon used, the terrain, and weather information in the event of chemical, biological, or radiological incidents. NBC_RAMS (Nuclear, Biological and Chemical Reporting And Modeling S/W System), developed by the ADD (Agency for Defense Development), is used not only to support decision making for various military operations and exercises but also for post-event analysis of CBRN incidents. Building on the core engine of NBC_RAMS, we introduce a CBRN hazard assessment scenario analysis system that can generate contamination distribution predictions reflecting various CBRN scenarios, and we describe how to apply it for specific purposes in terms of input information, meteorological data, land data including land coverage and DEM, and building data in polygon form. As practical use cases, we address a technology that tracks the origin of a contaminant source with artificial intelligence and a technology that selects the optimal locations of CBRN detection sensors using score data obtained by analyzing the large amounts of data generated with the scenario analysis system. Through this system, it is possible to generate CBRN training and analysis data specialized for AI and to support the planning of operations and exercises by predicting the battlefield.
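Purely as an illustration of the scenario inputs listed above (release information, meteorology, land coverage and DEM, and building polygons), the dataclass sketch below shows one possible way to organize a batch of scenarios. All field names and file formats are hypothetical and not the NBC_RAMS interface.

```python
# Hypothetical scenario-input structure for batch generation of
# contamination-spread runs; not the NBC_RAMS data format.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Release:
    agent: str                        # e.g. a chemical agent code
    mass_kg: float
    location: Tuple[float, float]     # (latitude, longitude)

@dataclass
class Weather:
    wind_speed_mps: float
    wind_dir_deg: float
    stability_class: str              # Pasquill class "A".."F"

@dataclass
class Scenario:
    release: Release
    weather: Weather
    land_cover_path: str              # raster of land coverage
    dem_path: str                     # digital elevation model
    building_polygons_path: str       # vector layer of building footprints

scenarios: List[Scenario] = [
    Scenario(Release("GB", 10.0, (37.5, 127.0)),
             Weather(3.0, 270.0, "D"),
             "landcover.tif", "dem.tif", "buildings.geojson"),
]
```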

3D Object Detection via Multi-Scale Feature Knowledge Distillation

  • Se-Gwon Cheon;Hyuk-Jin Shin;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.10
    • /
    • pp.35-45
    • /
    • 2024
  • In this paper, we propose Multi-Scale Feature Knowledge Distillation for 3D Object Detection (M3KD), which extracts knowledge from a teacher model and transfers it to a student model while taking multi-scale feature maps into account. To achieve this, we minimize the L2 loss between the feature maps at each pyramid level of the student model and the corresponding feature maps of the teacher model, so that the student can mimic the teacher's backbone information, which improves the overall accuracy of the student model. We also apply the class-logits knowledge distillation used in image classification, letting the student mimic the classification logits of the teacher, to guide the student toward higher detection accuracy. On the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) dataset, our M3KD student model achieves a 30% inference speed improvement over the teacher model. Additionally, our method achieves an average improvement of 1.08% in 3D mean Average Precision (mAP) across all classes and difficulty levels compared to the baseline student model. Furthermore, when integrated with recent knowledge distillation methods such as PKD and SemCKD, our approach achieves additional 3D mAP improvements of 0.42% and 0.52%, respectively, further enhancing performance.
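The sketch below illustrates a multi-scale feature distillation loss of the kind described above: an L2 (MSE) term at each pyramid level plus a logit-distillation term. The KL-divergence-with-temperature form of the logit term, the tensor shapes, and the loss weights are common conventions assumed here, not the authors' exact implementation.

```python
# Conceptual sketch of a multi-scale feature + logit distillation loss.
import torch
import torch.nn.functional as F

def m3kd_style_loss(student_feats, teacher_feats, student_logits, teacher_logits,
                    feat_weight=1.0, logit_weight=1.0, temperature=2.0):
    # Feature term: MSE between student and (detached) teacher maps per pyramid level.
    feat_loss = sum(F.mse_loss(s, t.detach()) for s, t in zip(student_feats, teacher_feats))
    # Logit term: KL divergence between temperature-softened class distributions.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  F.softmax(teacher_logits.detach() / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2
    return feat_weight * feat_loss + logit_weight * kd

# Toy usage with random tensors standing in for pyramid features and logits.
s_feats = [torch.randn(2, 64, s, s, requires_grad=True) for s in (32, 16, 8)]
t_feats = [torch.randn(2, 64, s, s) for s in (32, 16, 8)]
s_logits, t_logits = torch.randn(2, 3, requires_grad=True), torch.randn(2, 3)
loss = m3kd_style_loss(s_feats, t_feats, s_logits, t_logits)
loss.backward()
```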

Tracing the Development and Spread Patterns of OSS using the Method of Netnography - The Case of JavaScript Frameworks - (네트노그라피를 이용한 공개 소프트웨어의 개발 및 확산 패턴 분석에 관한 연구 - 자바스크립트 프레임워크 사례를 중심으로 -)

  • Kang, Heesuk;Yoon, Inhwan;Lee, Heesan
    • Management & Information Systems Review
    • /
    • v.36 no.3
    • /
    • pp.131-150
    • /
    • 2017
  • The purpose of this study is to observe the spread pattern of open source software (OSS) and its relations with surrounding actors over the period of its operation. To investigate how the participants in an OSS project change, we use netnography on the basis of online data, which can trace these change patterns over time. For this purpose, three OSS JavaScript frameworks (jQuery, MooTools, and YUI) were compared, and the corresponding data were collected from GitHub's open application programming interface (API) as well as blog and web searches. This research utilizes the translation process of actor-network theory to categorize the stages of the change patterns in the OSS translation process. In the project commencement stage, we identified three types of OSS-related actors and defined the relationships among them. The period in which the master first launches the project and then refines it through source-code maintenance with the people involved constitutes the project growth stage. The subsequent period, in which users pass through observation and learning by being exposed to promotion activities and code usage and then become active participants, is regarded as the 'leap of participants' stage. Our results emphasize the importance of promotion in participants' selection of an OSS project for participation and confirm a crowding-out effect in which the rapid speed of OSS development retarded the emergence of participants.
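As an illustration of the GitHub-API data collection mentioned above (not the authors' collection scripts), a minimal sketch that queries the public REST API for contributor data; the repository paths and the unauthenticated, first-page-only request are assumptions.

```python
# Sketch: pull contributor lists for the three JavaScript frameworks from
# GitHub's public REST API. Unauthenticated calls are rate-limited.
import requests

repos = ["jquery/jquery", "mootools/mootools-core", "yui/yui3"]
for repo in repos:
    resp = requests.get(f"https://api.github.com/repos/{repo}/contributors",
                        params={"per_page": 100})
    resp.raise_for_status()
    print(repo, "contributors on first page:", len(resp.json()))
```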


Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulty in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF), and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from this domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF, and hybrid CF. Model-based CF addresses the drawbacks of CF by adopting a Bayesian model, a clustering model, or a dependency network model. This filtering technique not only alleviates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model building and results in a tradeoff between performance and scalability. This tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved, and cumulative unreflected changes in the domain environment eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into CBCF to propose predictive clustering-based CF (PCCF), which addresses the issues of reduced coverage and unstable performance. The method mitigates performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users, and it mitigates reduced coverage by expanding coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using the patterns of change (propensities) in user preferences in propensity clustering. Last, the preference prediction model is built to predict user preferences for items during preference prediction. The proposed method was validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC, and PCCF in an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC, and PCCF for a comparative analysis of the resulting changes in system performance. The results revealed that the suggested method produced an insignificant improvement in performance compared with the existing techniques and failed to achieve a significant improvement in the standard deviation, which indicates the degree of data fluctuation. Nonetheless, it yielded a marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation.
The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation performance, which did not improve significantly over the existing techniques, and will consider introducing a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
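As a conceptual illustration of combining transition probabilities with fuzzy cluster membership (a toy sketch with made-up numbers, not the published PCCF algorithm), the following predicts a user's next-period cluster membership with a Markov transition matrix and scores an item as the membership-weighted average of cluster ratings.

```python
# Toy sketch: Markov transition over preference clusters + fuzzy membership
# used to predict a rating for one item.
import numpy as np

# Fuzzy membership of one user over 3 preference clusters (sums to 1).
membership = np.array([0.6, 0.3, 0.1])

# Transition probabilities between preference clusters, learned from how
# users' review scores shifted over time (rows sum to 1).
transition = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.3, 0.5],
])

# Average rating each cluster gives to the target item (hypothetical).
cluster_item_rating = np.array([4.5, 3.0, 2.0])

predicted_membership = membership @ transition        # next-period membership
predicted_rating = predicted_membership @ cluster_item_rating
print(round(float(predicted_rating), 2))
```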

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.125-141
    • /
    • 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it is becoming more important to handle these malicious attacks and hacks appropriately, and there is considerable interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. However, although they perform very well in normal situations, they cannot handle new or unknown patterns of network attacks. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can respond proactively to unknown threats. For a long time, researchers have adopted and tested various artificial intelligence techniques such as artificial neural networks, decision trees, and support vector machines to detect intrusions on the network, but most have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four binary classification models that may be complementary to each other: logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). Genetic algorithms (GA) are used as the tool for finding the optimal combining weights. The proposed model is built in two steps. In the first step, the integration model with the lowest prediction error (i.e., misclassification rate) is generated. In the second step, the model searches for the classification threshold for determining intrusions that minimizes the total misclassification cost. Calculating the total misclassification cost of an intrusion detection system requires understanding its asymmetric error cost scheme. There are generally two common types of error in intrusion detection. The first is the false-positive error (FPE), in which a wrong judgment may result in unnecessary fixes. The second is the false-negative error (FNE), which misjudges malicious activity as normal. Compared with FPE, FNE is more critical, so the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We also compared the results from our model with those from the single techniques to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, ANN using Neuroshell R4.0, and SVM using LIBSVM v2.90, a freeware tool for training SVM classifiers. Empirical results showed that our proposed GA-based model outperformed all the other comparative models in detecting network intrusions from the accuracy perspective.
The results also showed that the proposed model outperformed all the comparative models from the total misclassification cost perspective. Consequently, this study is expected to contribute to building cost-effective intelligent intrusion detection systems.
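The sketch below illustrates the general idea of GA-optimized combining: a tiny genetic algorithm evolves the four combining weights and the decision threshold against an asymmetric misclassification cost on synthetic data. Note that the paper optimizes the weights first and the threshold second, whereas this toy version evolves them jointly; all data and cost values are made up.

```python
# Toy sketch: evolve combining weights for four classifiers' probabilities
# plus a decision threshold, minimizing an asymmetric misclassification cost.
import numpy as np

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)                                      # 1 = intrusion, 0 = normal
probs = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.25, (n, 4)), 0, 1)  # LOGIT/DT/ANN/SVM stand-ins

C_FP, C_FN = 1.0, 10.0                                         # false negatives cost more

def cost(ind):
    w, thr = ind[:4], ind[4]
    w = w / (w.sum() + 1e-9)
    pred = (probs @ w >= thr).astype(int)
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    return C_FP * fp + C_FN * fn

pop = rng.random((40, 5))                                      # 4 weights + 1 threshold in [0, 1]
for _ in range(100):                                           # generations
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]                    # truncation selection
    cut = rng.integers(1, 5, size=20)
    children = np.where(np.arange(5) < cut[:, None],
                        parents, parents[rng.permutation(20)]) # one-point crossover
    children += rng.normal(0, 0.05, children.shape) * (rng.random(children.shape) < 0.2)  # mutation
    pop = np.vstack([parents, np.clip(children, 0, 1)])

best = pop[np.argmin([cost(ind) for ind in pop])]
print("weights:", np.round(best[:4] / best[:4].sum(), 3), "threshold:", round(float(best[4]), 3))
```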