Sign language is crucial for communication among individuals with hearing or speech impairments, and automated recognition systems can help learn and translate its many variants. However, such systems often face high computational demands and large memory footprints, limiting their use in real-time and resource-constrained environments. This research develops an optimized pipeline for American Sign Language (ASL) recognition, comparing Binarized Neural Networks (BNNs) with traditional full-precision neural networks. Using Larq, a library for training binarized models, we leverage BNNs' reduced memory and computational requirements, which make them well suited to embedded systems and edge devices. The study uses a dataset of ASL alphabet images, applying data augmentation to address class imbalance and occlusions. Both binarized and full-precision models are trained and evaluated on accuracy, precision, recall, F1-score, memory footprint, and inference time. Results show that BNNs offer competitive performance at significantly lower computational cost, demonstrating their potential for efficient and accessible ASL recognition systems.
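
To make the memory argument concrete, the following is a minimal NumPy sketch (not the paper's actual Larq model) of the core idea behind BNNs: real-valued weights are binarized to {-1, +1} with a sign function, so each weight needs 1 bit of storage instead of 32, giving roughly a 32x reduction in weight memory. The weight shapes here are illustrative, not taken from the study.

```python
import numpy as np

def binarize(w):
    """Binarize real-valued weights to {-1, +1} with the sign function.

    Ties at exactly 0 are mapped to +1, a common convention in
    straight-through-estimator quantizers.
    """
    return np.where(w >= 0, 1.0, -1.0).astype(np.float32)

# Hypothetical weight matrix, for illustration only
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32)).astype(np.float32)
wb = binarize(w)

# Every binarized weight is exactly -1 or +1
assert set(np.unique(wb)) <= {-1.0, 1.0}

# 1 bit per weight instead of 32 bits: ~32x smaller weight storage
full_precision_bits = w.size * 32
binarized_bits = w.size * 1
print(full_precision_bits // binarized_bits)  # -> 32
```

At inference time this binarization also lets multiply-accumulate operations be replaced by cheap bitwise XNOR and popcount operations, which is the source of the speedups on embedded and edge hardware discussed above.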