Context: This study explores how software professionals identify and address biases in AI systems within the software industry, focusing on practical knowledge and real-world applications. Goal: We aimed to understand the strategies practitioners use to manage bias and the implications of these biases for fairness debt. Method: We employed a qualitative research method, gathering insights from industry professionals through interviews and analyzing the data thematically. Findings: Professionals identify biases through discrepancies in model outputs, inconsistencies across demographic groups, and issues in training data. They address these biases with strategies such as improving data management, adjusting models, managing crises effectively, enhancing team diversity, and conducting ethical analyses. Conclusion: Our study provides initial evidence on managing fairness debt and lays the foundation for developing structured guidelines to handle fairness-related issues in AI systems.