Chatbots are here to stay and are being deployed across a growing range of application domains, some of which are safety-critical. We therefore need a way to guarantee that chatbots always behave as expected. In this paper, we propose RV4Rasa, a Runtime Verification framework that monitors whether a given chatbot deviates from its expected behaviour, where the latter is formalised as an interaction protocol between the end-user and the chatbot. We describe the engineering of RV4Rasa and its instantiation to monitor chatbots implemented with the Rasa framework. After presenting RV4Rasa’s structure, we report on experiments carried out in a simulated robotic scenario, where a chatbot is used to support the design of a factory work floor.