The growing number of (cloud) applications and devices massively increases communication rates and volumes, pushing integration systems to their (throughput) limits. While modern hardware such as Field Programmable Gate Arrays (FPGAs) has enabled low-latency query and event processing, application integration offers as yet unexplored processing opportunities. In this industry paper, we explore how to program integration semantics (e.g., message routing and transformation) in the form of Enterprise Integration Patterns (EIPs) on top of an FPGA, thus complementing existing research on FPGA data processing. We focus on message routing, re-define the EIPs for stream processing, and propose modular hardware implementations as templates that are synthesized into circuits. For our real-world “connected car” scenario (i.e., composed patterns), we discuss common and new optimizations that are especially relevant for hardware-based integration processes. Our experimental evaluation shows throughput competitive with modern general-purpose CPUs, and we discuss the results.