Research shows that analysts and developers treat privacy as an afterthought, which may lead to non-compliance and violations of users' privacy. Most current approaches, however, focus on extracting legal requirements from regulations and evaluating the compliance of software and processes with them. In this paper, we develop a novel approach based on chain-of-thought (CoT) prompting, in-context learning (ICL), and Large Language Models (LLMs) to extract privacy behaviors from software documentation available prior to development and then generate privacy requirements in the format of user stories. Our results show that widely used LLMs, such as GPT-4o and Llama 3, can identify privacy behaviors and generate privacy user stories with F1 scores exceeding 0.8. We also show that the performance of these models can be further improved through parameter tuning. Our findings provide insight into using and optimizing LLMs to generate privacy requirements from software documentation created before or during the software development lifecycle.
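To make the approach concrete, the sketch below illustrates how CoT prompting and ICL might be combined to turn a documentation snippet into a privacy user story. It is a minimal illustration, not the paper's actual pipeline: it assumes the OpenAI Python SDK, and the exemplars, behavior labels (e.g., COLLECT, SHARE, RETAIN), and prompt wording are hypothetical placeholders.

```python
# Illustrative sketch only; assumes `pip install openai` and an OPENAI_API_KEY
# in the environment. Model name, labels, and exemplars are placeholders, not
# the authors' actual prompts.
from openai import OpenAI

client = OpenAI()

# In-context learning (ICL): a few worked exemplars pairing a documentation
# snippet with chain-of-thought (CoT) reasoning and a resulting user story.
FEW_SHOT_EXEMPLARS = """\
Snippet: "The app uploads the user's contact list to our servers to suggest friends."
Reasoning: The snippet describes collection and off-device transfer of personal
data (contacts), so it involves the privacy behaviors COLLECT and SHARE.
User story: As a user, I want to be asked for consent before my contact list
is uploaded, so that I control who accesses my personal data.

Snippet: "Crash logs are stored locally and deleted after seven days."
Reasoning: Data is retained on-device for a fixed period; this is the privacy
behavior RETAIN, with no third-party sharing.
User story: As a user, I want crash logs to be deleted automatically after
seven days, so that my data is not retained longer than necessary.
"""

def privacy_user_story(snippet: str, model: str = "gpt-4o") -> str:
    """Ask the model to reason step by step (CoT) over a documentation
    snippet and emit a privacy requirement formatted as a user story."""
    prompt = (
        "You extract privacy behaviors from software documentation and write "
        "privacy requirements as user stories. Follow the format of the examples.\n\n"
        f"{FEW_SHOT_EXEMPLARS}\n"
        f'Snippet: "{snippet}"\n'
        "Reasoning:"  # the model continues with CoT, then the user story
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # low temperature for more consistent extraction
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(privacy_user_story(
        "The service shares anonymized usage statistics with analytics partners."
    ))
```

In this style of prompt, the exemplars supply the ICL signal while the trailing "Reasoning:" cue elicits the chain of thought before the final user story; temperature is one of the parameters the abstract suggests tuning to improve performance.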