The widespread adoption of Large Language Models (LLMs) in software development is transforming programming from a generative activity into one dominated by prompt engineering and the evaluation of AI-generated solutions. This shift introduces new cognitive challenges that can amplify existing decision-making biases or create entirely novel ones. Cognitive biases are systematic thinking patterns that lead people away from logical reasoning, often resulting in errors, poor decisions, or sub-optimal actions. Despite LLMs becoming integral to modern development workflows, we lack a systematic understanding of how cognitive biases manifest in, and impact, developer decision-making in these AI-collaborative contexts. This paper presents the first comprehensive study of cognitive biases in LLM-assisted programming, using a mixed-methods approach: observational studies with n=14 student and professional developers, followed by surveys with n=22 additional developers. We first qualitatively analyze our data using the bias categorization developed for traditional, non-LLM workflows in prior work. Our findings suggest that these traditional software development bias categories are inadequate for explaining why LLM-related actions are more likely to be biased. Through a systematic analysis of 239 cognitive bias types, we develop a novel taxonomy of 15 bias categories comprising 90 biases specific to developer-LLM interactions, validated with cognitive psychologists. We found that 48.8% of all programmer actions were biased, rising to 56.4% for programmer-LLM interactions. Based on our survey analysis, we present practical tools and practices for programmers, along with recommendations for builders of LLM-based code generation tools, to help mitigate cognitive biases in human-AI programming.