When breakdowns occur during a human-chatbot conversation, the lack of transparency and the “black-box” nature of task-oriented chatbots can make it difficult for end-users to understand what went wrong and why. Inspired by recent HCI research on explainable AI, we explore the design space of explainable chatbot interfaces through ChatrEx. We designed two novel in-application chatbot interfaces (ChatrEx-VINC and ChatrEx-VST) that provide visual, example-based, step-by-step explanations of the chatbot’s underlying working during a breakdown. We implemented these chatbots for complex spreadsheet tasks and conducted an observational study (N=14) to compare our designs with current state-of-the-art chatbot interfaces and to assess their strengths and weaknesses. We found that the visual explanations in both ChatrEx-VINC and ChatrEx-VST enhanced users’ understanding of the reasons for a breakdown and improved their perceptions of usefulness, transparency, and trust during conversational breakdowns. We identify several opportunities for future HCI research to leverage explainable chatbot interfaces and better support human-chatbot interaction.