• Title/Abstract/Keywords: Neural Architecture Search

Search results: 43

Path-Based Computation Encoder for Neural Architecture Search

  • Yang, Ying;Zhang, Xu;Pan, Hu
    • Journal of Information Processing Systems / Vol. 18, No. 2 / pp. 188-196 / 2022
  • Recently, neural architecture search (NAS) has received increasing attention because it can replace human experts in designing neural network architectures for different tasks, and it has achieved remarkable results on many challenging problems. In this study, a path-based computation neural architecture encoder (PCE) is proposed. PCE first encodes the computation carried out along each path in a neural network and then aggregates the encodings of all paths through an attention mechanism. This simulates how information is computed along paths in a network and encodes the computation performed by the architecture rather than the structure of its graph, which is more consistent with the computational properties of neural networks. We performed an extensive comparison with eight encoding methods on two commonly used NAS search spaces (NAS-Bench-101 and NAS-Bench-201), covering both the predictive capability of performance predictors and the search capability of two search strategies (reinforcement-learning-based and Bayesian-optimization-based) when equipped with different encoders. The experimental evaluation shows that PCE is an efficient encoding method that effectively ranks and predicts neural architecture performance, thereby improving the efficiency of neural architecture search.
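
The following toy sketch (ours, not the authors' code) illustrates the path-based encoding idea: enumerate every input-to-output path through a small cell DAG, encode each path from the embeddings of the operations along it, and aggregate the path encodings with softmax attention. The op set, the 8-dimensional embeddings, and the single-query attention are all illustrative assumptions.

```python
# Hypothetical toy rendering of path-based cell encoding, NOT the authors'
# PCE code: op embeddings are random, and the cell is a small DAG in
# NAS-Bench-101 style (ops on nodes, edges as wiring).
import numpy as np

rng = np.random.default_rng(0)

OPS = ["input", "conv1x1", "conv3x3", "maxpool3x3", "output"]
OP_EMB = {op: rng.normal(size=8) for op in OPS}  # assumed 8-d op embeddings

# Adjacency list of a toy cell DAG: node index -> successor indices.
EDGES = {0: [1, 2], 1: [3], 2: [3], 3: []}
NODE_OPS = {0: "input", 1: "conv1x1", 2: "conv3x3", 3: "output"}

def enumerate_paths(node=0, prefix=()):
    """Enumerate all input->output paths through the cell."""
    prefix = prefix + (node,)
    if not EDGES[node]:
        return [prefix]
    paths = []
    for nxt in EDGES[node]:
        paths.extend(enumerate_paths(nxt, prefix))
    return paths

def encode_path(path):
    """Encode one path as the mean of the op embeddings along it."""
    return np.mean([OP_EMB[NODE_OPS[n]] for n in path], axis=0)

def encode_cell(query):
    """Attention-aggregate all path encodings into one cell encoding."""
    P = np.stack([encode_path(p) for p in enumerate_paths()])
    scores = P @ query                      # attention logits, one per path
    w = np.exp(scores - scores.max())
    w /= w.sum()                            # softmax weights over paths
    return w @ P                            # weighted sum of path encodings

print(encode_cell(query=rng.normal(size=8)))
```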

Robust architecture search using network adaptation

  • Rana, Amrita;Kim, Kyung Ki
    • 센서학회지 (Journal of Sensor Science and Technology) / Vol. 30, No. 5 / pp. 290-294 / 2021
  • Experts have designed popular and successful model architectures, which, however, are not optimal for every scenario. Despite the remarkable performance achieved by deep neural networks, manually designed networks for classification tasks remain the backbone of object detectors. One major challenge is the ImageNet pre-training required by the search-space representation; moreover, the searched network incurs a huge computational cost. To avoid the pre-training step, we introduce a network adaptation technique that starts from a backbone model pre-trained on ImageNet. The adaptation method efficiently adapts a manually designed ImageNet network to a new object-detection task, with neural architecture search (NAS) used to adapt the architecture. The adaptation is conducted on the MobileNetV2 network, and the proposed NAS is tested with the SSDLite detector. The results demonstrate improvements over the existing network architecture in terms of search cost, total number of multiply-add operations (MAdds), and mean average precision (mAP). The total computational cost of the proposed NAS is much lower than that of state-of-the-art (SOTA) NAS methods.

Graph Convolutional Network Architecture Search: Network Architecture Search Using Graph Convolutional Neural Networks

  • 최수연;박종열
    • 문화기술의 융합 (The Journal of the Convergence on Culture Technology) / Vol. 9, No. 1 / pp. 649-654 / 2023
  • This paper proposes a neural architecture search model designed with graph convolutional neural networks. Because deep learning trains as a black box, one cannot verify whether a designed model has the architecture that yields optimal performance. A neural architecture search model consists of a network that generates models and the generated network, a convolutional neural network. Conventional NAS models use a recurrent neural network as the generator; in this paper, we instead propose GC-NAS, which generates convolutional network models using a graph convolutional neural network. The proposed GC-NAS searches for depth using a Layer Extraction Block and, using a Hyper Parameter Prediction Block, searches in parallel for spatial and temporal information (hyperparameters) based on the depth information. Because depth information is reflected, the search space is wider, and because the depth-conditioned parallel search gives the exploration a clear objective, the approach is judged to be structurally superior in theory to conventional RNN-based NAS. Through its graph convolution blocks and graph generation algorithm, GC-NAS is expected to resolve the high-dimensional time-axis problem and the limited spatial search range that recurrent networks impose on existing NAS models. We also hope that this paper will prompt active research on applying graph convolutional networks to neural architecture search.
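
The abstract does not give the internals of the Layer Extraction Block or the Hyper Parameter Prediction Block, so the sketch below shows only the standard graph-convolution propagation rule a GCN-based controller like GC-NAS would build on; the toy architecture graph, feature sizes, and pooled "depth head" are assumptions.

```python
# Minimal sketch of the graph-convolution step underlying a GCN-based
# controller. This is the standard Kipf-Welling propagation rule, not the
# paper's Layer Extraction / Hyper Parameter Prediction blocks, whose
# internals are not given in the abstract.
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0],                      # toy 3-node architecture graph
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))                   # initial node features
W = rng.normal(size=(4, 4))                   # learnable layer weights

H1 = gcn_layer(A, H, W)
depth_logits = H1.mean(axis=0)                # pooled features that a depth
print(depth_logits)                           # or hyperparameter head could use
```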

Training-Free Hardware-Aware Neural Architecture Search with Reinforcement Learning

  • Tran, Linh Tam;Bae, Sung-Ho
    • 방송공학회논문지 (Journal of Broadcast Engineering) / Vol. 26, No. 7 / pp. 855-861 / 2021
  • Neural architecture search (NAS) is a cutting-edge technology in the machine learning community. NAS Without Training (NASWOT) was recently proposed to tackle the high computational demands of NAS by leveraging indicators that predict the performance of architectures before training. The advantage of these indicators is that they do not require any training, so NASWOT reduces search time and computational cost significantly. However, NASWOT considers only how well networks are predicted to perform, which does not guarantee fast inference on hardware devices. In this paper, we propose a multi-objective reward function that considers both a network's latency and its predicted performance, and we incorporate it into a reinforcement learning approach to search for the best networks with low latency. Unlike other methods that use FLOPs as a latency measure, which does not reflect actual latency, we obtain each network's latency from the hardware NAS bench. We conduct extensive experiments on NAS-Bench-201 using the CIFAR-10, CIFAR-100, and ImageNet-16-120 datasets and show that the proposed method can generate the best network under a latency constraint without training subnetworks.
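
A minimal sketch of the multi-objective reward idea, under assumed numbers: the training-free scores and the latency table below are made-up stand-ins for a NASWOT-style indicator and the hardware NAS bench lookup, and the controller is a plain categorical REINFORCE policy rather than the paper's full search.

```python
# Toy sketch: combine a training-free score with a latency read from a
# hardware latency table, and update a categorical REINFORCE policy over
# candidate architectures. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

SCORE   = np.array([0.60, 0.85, 0.80, 0.40])   # training-free proxy scores
LATENCY = np.array([4.0, 9.0, 5.0, 2.0])       # ms, from a latency table
LAT_LIMIT, LAMBDA = 6.0, 0.5                   # constraint and trade-off

def reward(i):
    """Predicted-performance term minus a penalty for exceeding the limit."""
    return SCORE[i] - LAMBDA * max(0.0, LATENCY[i] - LAT_LIMIT) / LAT_LIMIT

theta = np.zeros(4)                            # policy logits
for step in range(2000):
    p = np.exp(theta - theta.max()); p /= p.sum()
    a = rng.choice(4, p=p)                     # sample an architecture
    r = reward(a)
    grad = -p; grad[a] += 1.0                  # d log pi(a) / d theta
    theta += 0.05 * r * grad                   # REINFORCE update

print("selected:", int(np.argmax(theta)))     # should pick arch 2: good
                                              # score, within latency budget
```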

Automated Optimal Neural Architecture Search Using Group-wise Candidate Operation Selection Based on Candidate Operation Gradients (DG-DARTS)

  • 박성진;송하윤
    • 한국정보처리학회 학술대회논문집 (KIPS Conference Proceedings) / 2020 KIPS Fall Conference / pp. 850-853 / 2020
  • Differentiable architecture search (DARTS), which is based on gradient descent, selects in a single architecture search the one operation with the highest weight among all candidate operations. Operations of a similar kind can split the weight between them, a "vote-splitting" effect in which an operation with better performance fails to be selected. To prevent this, we cluster the candidate operations into groups based on the gradients of their architecture-parameter weights. The weights are then summed per group, and a second architecture search is run using only the group with the highest total weight. Each architecture search runs for half the epochs of DARTS, so the total number of epochs is the same, but the second search uses only the selected operation group and therefore requires less search cost than DARTS. Resolving the vote-splitting problem and splitting the search into two stages yields a 2.46% error rate on the CIFAR-10 dataset with a search time of 0.16 GPU-days.
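
A toy rendering of the grouping step, assuming synthetic gradients: operations whose architecture-parameter gradients point in similar directions (high cosine similarity) are merged into one group, the softmax weights are pooled per group, and only the strongest group would be kept for the second, shorter search.

```python
# Sketch of gradient-based operation grouping with synthetic stand-in
# gradients; the clustering rule here is a simple greedy cosine threshold,
# not necessarily the paper's exact procedure.
import numpy as np

rng = np.random.default_rng(0)
OPS = ["sep3x3", "sep5x5", "dil3x3", "maxpool", "avgpool", "skip"]

alpha = rng.normal(size=6)                 # architecture parameters
grads = rng.normal(size=(6, 16))           # per-op gradient vectors (fake)
grads[0:3] += 2.0                          # make the conv-like ops similar

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Greedy clustering: ops whose gradient cosine similarity exceeds a
# threshold share a group ("vote splitting" candidates).
groups = []
for i in range(len(OPS)):
    for g in groups:
        if cosine(grads[i], grads[g[0]]) > 0.7:
            g.append(i); break
    else:
        groups.append([i])

w = np.exp(alpha - alpha.max()); w /= w.sum()    # softmax op weights
group_w = [sum(w[i] for i in g) for g in groups] # pooled group weights
best = groups[int(np.argmax(group_w))]
print("ops kept for the second search:", [OPS[i] for i in best])
```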

Improving Accuracy per Parameter through Channel Pruning Based on Neural Architecture Search in Object Detection

  • 노재현;유승현;손승욱;정용화
    • 한국정보처리학회 학술대회논문집 (KIPS Conference Proceedings) / 2023 KIPS Fall Conference / pp. 512-513 / 2023
  • In CNN-based deep learning, models use a large number of parameters to increase object-detection accuracy. Using many parameters raises the minimum hardware requirements and lowers processing speed, so various pruning techniques are used to reduce the parameter count with minimal loss of accuracy. In this work, we use the Artificial Bee Colony (ABC) algorithm, a neural architecture search (NAS)-based channel pruning method. Unlike previous NAS-based channel pruning papers, which experimented only on classification tasks, we apply NAS-based channel pruning to an object-detection task and confirm that accuracy per parameter improves compared with conventional uniform pruning.
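
The sketch below is a heavily simplified Artificial Bee Colony loop over per-layer channel keep-ratios (employed and scout phases only; the onlooker phase is omitted for brevity). The fitness is a made-up proxy that trades accuracy against parameter count; a real run would fine-tune and evaluate the pruned detector.

```python
# Simplified ABC search over channel keep-ratios; fitness is an invented
# proxy, not a trained detector's accuracy.
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS, N_FOOD, ITERS = 4, 8, 60

def fitness(ratios):
    acc_proxy = 1.0 - 0.5 * np.mean((1.0 - ratios) ** 2)  # fake accuracy
    params = np.mean(ratios ** 2)                          # fake param cost
    return acc_proxy - 0.4 * params

foods = rng.uniform(0.2, 1.0, size=(N_FOOD, N_LAYERS))    # candidate ratios
fit = np.array([fitness(f) for f in foods])
trials = np.zeros(N_FOOD)

for _ in range(ITERS):
    for i in range(N_FOOD):               # employed bees: local perturbation
        j = (i + 1 + rng.integers(N_FOOD - 1)) % N_FOOD   # partner, j != i
        d = rng.integers(N_LAYERS)
        cand = foods[i].copy()
        cand[d] = np.clip(cand[d] + rng.uniform(-1, 1) *
                          (foods[i, d] - foods[j, d]), 0.05, 1.0)
        f = fitness(cand)
        if f > fit[i]:
            foods[i], fit[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
    stale = trials > 10                    # scout bees: reset stale sources
    foods[stale] = rng.uniform(0.2, 1.0, size=(int(stale.sum()), N_LAYERS))
    fit[stale] = [fitness(f) for f in foods[stale]]
    trials[stale] = 0

print("best keep-ratios:", np.round(foods[np.argmax(fit)], 2))
```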

Recent Research & Development Trends in Automated Machine Learning (AutoML)

  • 문용혁;신익희;이용주;민옥기
    • 전자통신동향분석 (Electronics and Telecommunications Trends) / Vol. 34, No. 4 / pp. 32-42 / 2019
  • The performance of machine learning algorithms depends significantly on how the hyperparameter configuration is identified and how the neural network architecture is designed. However, this requires expert knowledge of the relevant task domains and prohibitive computation time. To optimize these two processes with minimal effort, many studies have investigated automated machine learning in recent years. This paper reviews the conventional random, grid, and Bayesian methods for hyperparameter optimization (HPO) and addresses recent approaches that speed up the identification of the best set of hyperparameters. We further investigate existing neural architecture search (NAS) techniques based on evolutionary algorithms, reinforcement learning, and gradient-based optimization, and we analyze their theoretical characteristics and reported performance. Finally, future research directions and challenges in HPO and NAS are described.
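
As a small illustration of two of the surveyed HPO baselines, the sketch below runs grid search and random search over the same trial budget on a synthetic validation-loss surface; the objective and the hyperparameter ranges are invented for the example.

```python
# Grid search vs. random search on a made-up "validation loss" over a
# learning rate and a weight-decay coefficient.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def val_loss(lr, wd):
    """Synthetic stand-in for a model's validation loss."""
    return (np.log10(lr) + 2.5) ** 2 + 3.0 * (wd - 0.01) ** 2

# Grid search: 4 x 4 = 16 fixed trials.
lrs, wds = [1e-4, 1e-3, 1e-2, 1e-1], [0.0, 0.01, 0.05, 0.1]
best_grid = min(itertools.product(lrs, wds), key=lambda p: val_loss(*p))

# Random search: 16 trials sampled log-uniformly / uniformly.
cands = [(10 ** rng.uniform(-4, -1), rng.uniform(0, 0.1)) for _ in range(16)]
best_rand = min(cands, key=lambda p: val_loss(*p))

print("grid  :", best_grid, val_loss(*best_grid))
print("random:", best_rand, val_loss(*best_rand))
```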

Neural Network Architecture Optimization and Application

  • Liu, Zhijun;Sugisaka, Masanori
    • 제어로봇시스템학회 학술대회논문집 (ICROS Conference Proceedings) / 14th Annual Conference, 1999 / pp. 214-217 / 1999
  • In this paper, a genetic algorithm (GA) is implemented to search for the optimal structure (i.e., the kind of neural network and the number of inputs and hidden neurons) of neural networks used to approximate a given nonlinear function. Two kinds of neural networks are considered: multilayer feedforward networks [1] and time-delay neural networks (TDNN) [2]. The synapse weights of each neural network in each generation are obtained by the associated training algorithms. Simulation results for the nonlinear function approximation are presented, and possible future improvements are outlined.
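
A compact sketch of the GA idea under stated simplifications: genomes encode only the number of hidden neurons, and fitness is scored with an extreme-learning-machine-style fit (random hidden layer, least-squares output weights) instead of the paper's full training algorithms.

```python
# GA over hidden-layer size for approximating a nonlinear function; the
# cheap ELM-style fitness is our simplification, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 0] ** 2        # target nonlinear function

def fitness(n_hidden):
    """Negative MSE of a random-hidden-layer net with lstsq output weights."""
    W = rng.normal(size=(1, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                           # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights
    return -np.mean((H @ beta - y) ** 2) - 1e-4 * n_hidden  # size penalty

pop = list(rng.integers(2, 40, size=10))             # genomes: hidden sizes
for gen in range(15):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:4]                             # truncation selection
    children = [max(2, (a + b) // 2 + rng.integers(-3, 4))
                for a, b in zip(parents, parents[::-1])]  # crossover+mutation
    pop = parents + children + list(rng.integers(2, 40, size=2))

print("best hidden size:", max(pop, key=fitness))
```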

Cross-architecture Binary Function Similarity Detection based on Composite Feature Model

  • Xiaonan Li;Guimin Zhang;Qingbao Li;Ping Zhang;Zhifeng Chen;Jinjin Liu;Shudan Yue
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 8 / pp. 2101-2123 / 2023
  • Recent studies have shown that neural-network-based binary code similarity detection performs well in vulnerability mining, plagiarism detection, and malicious code analysis. However, existing cross-architecture methods still suffer from insufficient feature characterization and low discrimination accuracy. To address these issues, this paper proposes a cross-architecture binary function similarity detection method based on a composite feature model (SDCFM). First, the binary function is converted into a vector representation according to the proposed composite feature model, which combines instruction statistical features, control-flow-graph structural features, and API calling behavioral features. The composite features are then embedded by the proposed hierarchical embedding network, based on a graph neural network, in which block-level and function-level features are processed separately and finally fused into the embedding. In addition, to make the trained model more accurate and stable, the method uses the embeddings of predecessor nodes to modify each node embedding during the iterative updating of the graph neural network. To assess the effectiveness of the composite feature model, we compare SDCFM with state-of-the-art methods on benchmark datasets. The experimental results show that SDCFM performs well both on the area under the curve in the binary function similarity detection task and on vulnerable candidate function ranking in the vulnerability search task.
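
A schematic sketch of the composite-feature embedding idea, with invented features: each basic block gets a vector of simple instruction statistics, block embeddings are propagated over the control-flow graph by mean-aggregation message passing, pooled into a function embedding, and two functions are scored by cosine similarity. SDCFM's hierarchical network and predecessor-based correction are not reproduced here.

```python
# Simplified function-embedding pipeline; features and aggregation are
# stand-ins for SDCFM's hierarchical embedding network.
import numpy as np

def embed_function(block_feats, adj, steps=3):
    """block_feats: (n_blocks, d) statistics; adj: (n, n) CFG adjacency."""
    H = np.asarray(block_feats, dtype=float)
    A = np.asarray(adj, dtype=float) + np.eye(len(H))   # self-loops
    A /= A.sum(axis=1, keepdims=True)                   # mean aggregation
    for _ in range(steps):
        H = np.tanh(A @ H)                              # message passing
    return H.mean(axis=0)                               # function embedding

def similarity(e1, e2):
    return e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))

# Toy per-block features, e.g. (#arithmetic, #transfer, #call) counts.
f1 = [[4, 1, 0], [2, 2, 1], [6, 1, 0]]
a1 = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
f2 = [[4, 1, 0], [3, 2, 1], [5, 1, 0]]    # near-duplicate of f1
print("similarity:", similarity(embed_function(f1, a1),
                                embed_function(f2, a1)))
```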

K-Means-Based Polynomial Radial Basis Function Neural Networks Using the Space Search Optimization Algorithm: Design and Comparative Studies

  • 김욱동;오성권
    • 제어로봇시스템학회논문지 (Journal of Institute of Control, Robotics and Systems) / Vol. 17, No. 8 / pp. 731-738 / 2011
  • In this paper, we introduce an advanced architecture of K-Means-clustering-based polynomial radial basis function neural networks (p-RBFNNs) designed with the aid of the Space Search Optimization Algorithm (SSOA) and develop a comprehensive design methodology supporting their construction. To design the optimized p-RBFNNs, the center of each receptive field is determined by running the K-Means clustering algorithm, and then the center and width of the corresponding receptive field are optimized through SSOA. The connections (weights) of the proposed p-RBFNNs are functional in character and are realized by considering three types of polynomials. In addition, weighted least squares estimation (WLSE) is used to estimate the polynomial coefficients (serving as the functional connections of the network) of each node, which improves the local learning capability and interpretability of the proposed model. The model is illustrated with a synthetic nonlinear function and the NOx machine learning dataset. A comparative analysis reveals that the proposed model exhibits higher accuracy and superb predictive capability in comparison to previous models in the literature.
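
A minimal sketch of the two design steps named in the abstract, K-Means centers followed by least-squares output weights, on synthetic data. The SSOA tuning of centers and widths, the polynomial connections, and the weighting in WLSE are all simplified away; the width sigma and the cluster count are assumptions.

```python
# K-Means picks the RBF centers; plain least squares (a simplification of
# the paper's WLSE with polynomial connections) fits the output weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # noisy target

def kmeans(X, k, iters=20):
    C = X[rng.choice(len(X), k, replace=False)]     # random init
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):                          # update non-empty clusters
            if np.any(lab == j):
                C[j] = X[lab == j].mean(axis=0)
    return C

k, sigma = 8, 0.8                                   # assumed width/cluster count
C = kmeans(X, k)
Phi = np.exp(-((X[:, None] - C[None]) ** 2).sum(-1) / (2 * sigma ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # output weights (LSE)
pred = Phi @ w
print("train MSE:", np.mean((pred - y) ** 2))
```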