Large language models have rapidly taken over software development tools and are now used to generate code, write documentation, and even fix GitHub issues. Despite this success, many studies have shown that these models often struggle to reason about code properties such as performance and security. In this paper, we demonstrate the limitations of text-based learning for reasoning about code properties and show that structured code representations are more effective for capturing some of them. We evaluate on several code benchmarks and expose the limitations of the internal code representations that large language models learn.