Cyber-Physical Systems (CPS) often leverage Reinforcement Learning (RL) techniques to adapt dynamically to changing environments and optimize performance. However, the inherent uncertainty of RL makes it challenging to construct safety cases for RL components. To alleviate this problem, we propose SAFE-RL (Safety and Accountability Framework for Evaluating Reinforcement Learning) to support the development, validation, and safe deployment of RL-based CPS. We adopt a design science approach to construct the framework and demonstrate its use in three RL applications in Small Autonomous Vehicles, illustrating its applicability to real-world RL systems.