The large-scale adoption of systems that automate classifications using Machine Learning (ML) algorithms to support or make decisions with profound consequences for human beings raises pressing challenges. It is important to understand how users' trust is affected by ML models' suggestions, even when those suggestions are wrong. Much of the research so far has focused on the user's ability to interpret what a model has learned. In this work, we seek to understand another aspect of ML interpretability: how the presence of classification probabilities affects users' trust in model outcomes, especially in ambiguous scenarios. To this end, we conducted an online survey in which we asked participants to evaluate their agreement with an automatic classification made by an ML model, both before and after presenting them with the model's classification probabilities. Surprisingly, we found that, in ambiguous scenarios, respondents agreed more with incorrect model outcomes than with correct ones, a finding that calls for further analysis.