Quantifying natural rates of groundwater recharge is imperative for efficient groundwater management. Although recharge is one of the most important components of groundwater studies, it is also one of the least understood, largely because recharge rates vary widely in space and time and are difficult to measure directly. The goal of this study is to assess the performance of different machine learning (ML) methods for predicting spatiotemporal groundwater recharge within the contiguous United States (CONUS) and to compare the resulting estimates with those from other approaches, such as linear regression and the water-table fluctuation method. To this end, multi-layer perceptron (MLP) and long short-term memory (LSTM) models were used. The LSTM model is a deep neural network that evolved from the recurrent neural network (RNN) by adding an explicit memory cell, which makes it efficient at predicting time series associated with groundwater variations. Satellite and field-monitoring data, including rainfall, groundwater levels, and soil properties, were used as model input variables. Model performance was evaluated using the root mean squared error (RMSE) and the coefficient of determination, both calculated between the predicted groundwater recharge values and other estimates of recharge. Furthermore, the correlation between the input variables and groundwater recharge was evaluated at multiple wells to develop recharge estimates at the CONUS scale. The results of the current study provide valuable information on the efficiency of ML methods for predicting groundwater recharge.
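The abstract does not specify the LSTM configuration used in the study. As a minimal sketch of the explicit memory cell the LSTM relies on, the following illustrates one standard gated time step (all names and dimensions here are illustrative, not taken from the study):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: gates control what the memory cell c retains.

    x      : input at this time step, shape (D,)
    h_prev : previous hidden state, shape (H,)
    c_prev : previous memory cell, shape (H,)
    W, U, b: stacked gate parameters, shapes (4H, D), (4H, H), (4H,)
    """
    z = W @ x + U @ h_prev + b   # stacked pre-activations, shape (4H,)
    H = h_prev.shape[0]
    i = sigmoid(z[:H])           # input gate: how much new information to write
    f = sigmoid(z[H:2 * H])      # forget gate: how much old memory to keep
    o = sigmoid(z[2 * H:3 * H])  # output gate: how much memory to expose
    g = np.tanh(z[3 * H:])       # candidate cell update
    c = f * c_prev + i * g       # explicit memory cell
    h = o * np.tanh(c)           # hidden state passed to the next step
    return h, c
```

The forget gate lets the cell carry information across many time steps, which is what makes this architecture well suited to long time series such as groundwater-level records.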
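The two evaluation metrics named above can be computed as follows; this is a plain-Python sketch of the standard definitions, not code from the study:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between reference and predicted recharge."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2(y_true, y_pred):
    """Coefficient of determination (R^2): 1 minus residual over total variance."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives an RMSE of 0 and an R^2 of 1; predicting the mean of the reference series gives an R^2 of 0.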