Formal specifications are critical for reasoning about the correctness of complex systems and for enabling runtime monitoring. While recent advances have focused on automatically \emph{learning} such specifications, the challenge of identifying meaningful and non-trivial ones from a large, noisy set of candidates remains largely unaddressed. In this position paper, we propose an approach for specification ranking: identifying the specifications that matter most for overall system correctness. We design a four-metric rating framework that quantifies the importance of formal specifications to the underlying system, and we leverage the reasoning capabilities of Large Language Models to rank learned specifications according to this framework. We demonstrate the effectiveness of our approach on distributed-system specifications learned by an automated tool across 11 open-source and 3 proprietary system benchmarks.