Hydrological models process meteorological inputs such as precipitation to yield streamflow and other hydrological quantities as outputs. Model simulations need to be evaluated against observed hydrological quantities. Typically, streamflow is the only observed output available; therefore, simulated and observed streamflows are compared to evaluate a model. The model evaluation problem is complicated by the presence of predominantly epistemic errors in observed precipitation and streamflow. A model evaluation procedure must therefore be designed to account for these errors and avoid committing Type II errors (rejecting a good model). The limits-of-acceptability (LoA) approach provides a suitable framework for this problem. Recently, it has been shown that a machine learning (ML) algorithm, the random forest (RF), can be used to develop LoAs over streamflow such that these LoAs account for both precipitation and streamflow uncertainty. A significant advantage of this method is that it can be used to evaluate models even at ungauged basins. In this study, the method was used to evaluate the Sacramento Soil Moisture Accounting (SAC-SMA) hydrological model in the St. Joseph River Watershed (SJRW). Numerical experiments were carried out to test the suitability of the method for both gauged and ungauged locations. A total of one million parameter sets were drawn uniformly from the parameter space to simulate streamflow. A parameter set was considered behavioral if the simulated streamflow values fell within the LoAs at most time steps. In addition, several streamflow signatures were used to constrain the model parameter space. This presentation will discuss the results of these experiments.
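The behavioral classification described above can be sketched in a few lines: a candidate parameter set is retained if its simulated hydrograph falls inside the LoA bounds at a sufficient fraction of time steps. This is a minimal illustration, not the study's implementation; the function name, the acceptance threshold, and the toy bounds are assumptions introduced here for clarity.

```python
import numpy as np

def is_behavioral(sim, lower, upper, min_fraction=0.9):
    """Return True if the simulated streamflow `sim` lies within the
    limits of acceptability [lower, upper] at at least `min_fraction`
    of the time steps. The 0.9 default threshold is illustrative only.
    """
    inside = (sim >= lower) & (sim <= upper)  # boolean mask per time step
    return inside.mean() >= min_fraction      # fraction of steps inside LoAs

# Toy example with 5 time steps (units arbitrary, e.g. m^3/s).
lower = np.array([1.0, 2.0, 1.5, 0.5, 1.0])  # lower LoA at each step
upper = np.array([3.0, 5.0, 4.0, 2.0, 3.0])  # upper LoA at each step
sim   = np.array([2.0, 4.5, 3.9, 1.0, 2.5])  # one candidate simulation

print(is_behavioral(sim, lower, upper, min_fraction=0.8))  # True
```

In a sampling experiment like the one described, this check would be applied to each of the sampled parameter sets' hydrographs, and only the behavioral subset would be carried forward for further constraining with streamflow signatures.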