This study explores the implementation of an efficient golf cart navigation system using camera-based deep learning. In constrained environments such as golf courses, both the accuracy and the processing speed of object detection models are crucial. While conventional object detection models offer high accuracy, their computational complexity makes them unsuitable for real-time deployment on embedded systems. In this paper, we introduce Efficient DETR (eDETR), a lightweight object detection model designed to address the limited computational resources available for golf course navigation. eDETR builds on the Detection Transformer (DETR) architecture and incorporates model compression techniques, including a non-parametric pooling-based encoder and knowledge distillation. These changes substantially reduce the model's computational complexity while largely preserving its accuracy. Experimental results on a custom dataset collected from an actual golf course show that eDETR with knowledge distillation achieves an Average Precision (AP) of 66.6%, a 10% reduction relative to DETR. In exchange, the model's computational cost drops by approximately 58% and its CPU inference is nearly four times faster, making it well suited to deployment in resource-constrained environments. These findings underscore the potential of eDETR as a practical model for real-time object detection in applications such as deep learning-based autonomous golf carts.
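To make the knowledge-distillation component concrete, the sketch below shows a standard soft-label distillation loss of the kind commonly used to transfer a large teacher detector's class predictions to a compact student. This is an illustrative, generic formulation (temperature-softened KL divergence, as in Hinton-style distillation), not the exact loss used in the eDETR paper; the logits shapes and temperature value are assumptions for the example.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; subtracting the max keeps exp() stable.
    e = np.exp((logits - logits.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label knowledge distillation loss (generic sketch):
    KL(teacher || student) on temperature-softened class distributions,
    scaled by T^2 so gradient magnitudes stay comparable across T."""
    p = softmax(teacher_logits, T)   # teacher's soft targets
    q = softmax(student_logits, T)   # student's soft predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# Example: the loss is zero when student matches the teacher exactly,
# and positive when their predicted distributions diverge.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[0.0, 1.0, 0.0]])
```

In practice this term is combined with the usual detection losses on ground-truth labels, so the student is supervised both by annotations and by the teacher's softened outputs.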