This talk covers governmental measures related to large language models (LLMs) and their potential implications for data privacy and national security. The discussion centers on how generative AI systems handle sensitive information, including data ingestion, storage, and potential vulnerabilities in their training sets. It explores tech sovereignty and the importance of open-source AI systems such as DeepSeek, which offer widely accessible alternatives to proprietary models. The talk also examines the broader implications of deploying AI models in government infrastructure, focusing on data security protocols and the challenges of managing information flow across different AI architectures. Finally, it emphasizes how privately owned, monopolistic AI systems, particularly US models with close ties to authoritarian government structures, pose significant risks to democratic institutions and to tech sovereignty worldwide.