Large language models (LLMs) have shown impressive capabilities in coding tasks, including code understanding and generation. However, these models are also susceptible to adversarial perturbations, such as case changes or whitespace modifications, which can severely degrade their performance. This study investigates the impact of semantics-preserving adversarial perturbations on the coding-task performance of the GPT-3.5 and GPT-4o models. In addition to evaluating individual perturbations, the study examines combined perturbation attacks, in which multiple perturbations from different categories are applied together. While combined attacks showed only marginal overall improvement over individual ones, they demonstrated a synergistic effect in specific scenarios, exploiting complementary vulnerabilities in the models.
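As a minimal illustration of the kinds of semantics-preserving perturbations discussed here, the sketch below applies a case change and a whitespace modification to a coding prompt, and then composes them into a combined attack. The function names and the specific transformations are illustrative assumptions, not the exact perturbations used in the study.

```python
import re

def case_change_perturbation(prompt: str) -> str:
    """Flip the case of the first letter of each word-like token.

    Assumed example of a 'case change' perturbation; the study's exact
    transformation may differ.
    """
    def flip(match: re.Match) -> str:
        word = match.group(0)
        return word[0].swapcase() + word[1:]
    return re.sub(r"[A-Za-z]\w*", flip, prompt)

def whitespace_perturbation(prompt: str) -> str:
    """Insert an extra space after commas; the meaning of the prompt is unchanged."""
    return prompt.replace(",", ",  ")

def combined_perturbation(prompt: str) -> str:
    """Apply perturbations from two different categories together (a 'combined attack')."""
    return whitespace_perturbation(case_change_perturbation(prompt))

if __name__ == "__main__":
    original = "Write a Python function that returns the sum of a list, sorted ascending."
    print(combined_perturbation(original))
```

A human reader still understands the perturbed prompt without difficulty, which is what makes such attacks a useful probe of model robustness rather than of prompt quality.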