Performance is a quality attribute of every software system that developers continually seek to improve. A performance fuzzer supports this task by automatically generating inputs that trigger performance bottlenecks. However, a developer must still manually localize the root causes of these bottlenecks. In this study, we apply grammar-based performance fuzzing to an example System Under Test (SUT), using response time to identify the problematic grammar constructs most likely to cause bottlenecks. We show that replacing these constructs yields an average speedup of 40.53x in 24 of 50 bottleneck cases. Furthermore, avoiding the problematic constructs during input generation provides an average speedup of 1.46x. These preliminary results suggest a measurable link between grammar constructs and performance bottlenecks, opening up the possibility of high-level categorization and analysis of such bottlenecks.
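To make the approach concrete, the sketch below is a minimal illustration of grammar-based performance fuzzing, not the study's implementation: the toy arithmetic grammar and the stand-in `process` function are hypothetical placeholders for the actual grammar and SUT. Each randomly derived input is fed to the SUT, its response time is attributed to the grammar rules used in the derivation, and rules are then ranked by mean response time as candidate bottleneck constructs.

```python
import random
import re
import time

# Toy grammar: nonterminals map to alternative expansions (hypothetical;
# the study's actual grammar is not reproduced here).
GRAMMAR = {
    "<expr>": ["<term>", "<term> + <expr>"],
    "<term>": ["<digit>", "(<expr>)"],
    "<digit>": [str(d) for d in range(10)],
}

def derive(symbol, used_rules, max_depth=10):
    """Randomly expand a nonterminal, recording which rules were used."""
    if symbol not in GRAMMAR:
        return symbol  # terminal token
    if max_depth <= 0:
        # Depth exhausted: pick the alternative with the fewest nonterminals.
        choice = min(GRAMMAR[symbol],
                     key=lambda a: len(re.findall(r"<[^>]+>", a)))
    else:
        choice = random.choice(GRAMMAR[symbol])
    used_rules.append((symbol, choice))
    return "".join(derive(tok, used_rules, max_depth - 1)
                   for tok in re.findall(r"<[^>]+>|[^<]+", choice))

def process(inp):
    # Stand-in for the SUT; a real harness would invoke the system here.
    return eval(inp)  # toy workload only; eval on fuzzed input is unsafe

def fuzz(trials=1000):
    """Attribute SUT response time to the grammar rules used per input."""
    cost = {}
    for _ in range(trials):
        used = []
        inp = derive("<expr>", used)
        start = time.perf_counter()
        process(inp)
        elapsed = time.perf_counter() - start
        for rule in used:
            cost.setdefault(rule, []).append(elapsed)
    # Rank rules by the mean response time of inputs that exercised them.
    return sorted(cost.items(), key=lambda kv: -sum(kv[1]) / len(kv[1]))

if __name__ == "__main__":
    for (lhs, rhs), times in fuzz()[:5]:
        print(f"{lhs} ::= {rhs}  mean={sum(times)/len(times):.6f}s  n={len(times)}")
```

Under this scheme, replacing or avoiding a top-ranked rule corresponds to the two mitigations evaluated in the study: rewriting the problematic construct in the SUT's input, or excluding that alternative from the grammar during generation.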