Staff Security Engineer (AI Security) @ Box Inc.
Who you are:
- Experienced security engineer with 5+ years in application security, DevSecOps, or security tooling, ideally with exposure to AI/ML security challenges.
- Deep understanding of AI agent architectures, generative AI models, and associated security risks such as prompt injection, adversarial attacks, and autonomous decision-making vulnerabilities.
- Proven track record implementing security tools and automation (SAST, DAST, SCA, API security scanning) integrated into CI/CD pipelines at scale.
- Experience with or strong interest in applying LLMs to security use cases, such as code analysis, vulnerability detection, or security documentation.
- Demonstrated ability to translate security requirements into practical AI applications that enhance the secure development lifecycle.
- Skilled in threat modeling methodologies and able to adapt traditional frameworks to dynamic AI systems.
- Proficient in at least one scripting language (e.g. Python) and familiar with multiple programming languages, cloud-native environments and container security.
- Strong communicator capable of articulating complex AI security concepts to both technical and non-technical stakeholders.
- Passionate about cybersecurity innovation; active participation in security communities, conferences, CTFs, bug bounty programs, or CVE submissions is preferred.
- Growth mindset with a proactive approach to learning and problem-solving in fast-evolving technology landscapes.
Preferred Skills:
- Experience working with Security Architecture patterns and context-aware access control mechanisms.
- Background in adversarial machine learning or AI robustness testing.
- Contributions to open source AI security projects or research publications in AI safety/security.
- Experience building or working with LLM-powered developer tools or security automation.
- Knowledge of prompt engineering techniques to optimize LLM outputs for security applications (see the illustrative sketch after this list).
- Understanding of the limitations of current LLM technologies and strategies to mitigate false positives/negatives in security contexts.
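To give a concrete flavor of the "LLMs applied to security use cases" skills above, here is a minimal, purely illustrative sketch of LLM-assisted vulnerability triage with a guard against false positives. The `call_llm` function, the prompt, and the JSON schema are hypothetical stand-ins, not Box tooling.

```python
# Illustrative sketch: prompt an LLM for structured security findings on a
# code diff, then filter low-confidence results to limit false positives.
import json

SYSTEM_PROMPT = (
    "You are a security code reviewer. Report only issues you can tie to a "
    "specific line of the diff. Respond with a JSON array of findings, each "
    "with: rule, line, severity (low|medium|high), rationale, confidence (0-1)."
)

def call_llm(system: str, user: str) -> str:
    """Hypothetical stand-in for whatever model client the team actually uses."""
    # Canned response so the sketch runs end to end without a real model.
    return json.dumps([{
        "rule": "hardcoded-secret",
        "line": 12,
        "severity": "high",
        "rationale": "String literal looks like a cloud API key.",
        "confidence": 0.92,
    }])

def triage_diff(diff: str, min_confidence: float = 0.7) -> list[dict]:
    """Request structured findings, then drop low-confidence ones before
    they reach a developer."""
    raw = call_llm(SYSTEM_PROMPT, f"Review this diff:\n{diff}")
    try:
        findings = json.loads(raw)
    except json.JSONDecodeError:
        return []  # treat malformed model output as "no reliable findings"
    return [f for f in findings if f.get("confidence", 0) >= min_confidence]

if __name__ == "__main__":
    print(triage_diff("+ AWS_KEY = 'AKIA...'"))
```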
Our compensation structure consists of base salary and equity in the form of restricted stock units.
What is Box?
Box (NYSE:BOX) is the leader in Intelligent Content Management. Our platform enables organizations to fuel collaboration, manage the entire content lifecycle, secure critical content, and transform business workflows with enterprise AI. We help companies thrive in the new AI-first era of business. Founded in 2005, Box simplifies work for leading global organizations, including AstraZeneca, JLL, Morgan Stanley, and Nationwide. Box is headquartered in Redwood City, CA, with offices across the United States, Europe, and Asia.
By joining Box, you will have the unique opportunity to continue driving our platform forward. Content powers how we work. It’s the billions of files and information flowing across teams, departments, and key business processes every single day: contracts, invoices, employee records, financials, product specs, marketing assets, and more. Our mission is to bring intelligence to the world of content management and empower our customers to completely transform workflows across their organizations. With the combination of AI and enterprise content, the opportunity has never been greater to transform how the world works together and at Box you will be on the front lines of this massive shift.
Why Box needs you:
We are seeking a highly skilled and visionary Staff Security Engineer to lead the security strategy and implementation for Generative AI and Agentic AI technologies within Box's platform. You will be instrumental in designing, developing, and operationalizing security controls that address the novel risks introduced by autonomous AI agents and generative models. Additionally, you will drive strategic initiatives to leverage LLMs to enhance our secure development lifecycle. Your work will ensure that Box remains a trusted leader in AI-powered content management by embedding security-by-design principles into all AI features and tooling.
Percentage of Time Spent:
- 40% building the AI Security program
- 30-40% leading the strategy for leveraging generative AI capabilities in the secure development lifecycle
- 20-30% partnering with engineering teams
Box lives its values, with community and in-person collaboration being a core part of our culture. Boxers are expected to work from their assigned office a minimum of 3 days per week. Your Recruiter will share more about how we work and company culture during the hiring process.
At Box, we believe unique and diverse experiences benefit our culture, our products, our customers, our company, and our world. We aim to recruit a passionate, high-performing workforce that reflects the world we live in. If you are head-over-heels about this role but unsure if you meet all the requirements, we encourage you to apply!
What you'll do:
- Lead the design and implementation of security architectures specifically tailored for Generative AI and Agentic AI systems, including agentic identity models, least-privilege access, runtime guardrails, and audit logging (a minimal illustrative sketch follows this list).
- Develop threat modeling approaches adapted for dynamic, non-deterministic AI agent behaviors, identifying autonomy-related risks such as prompt injection, tool misuse, agent impersonation, and multi-agent system attacks.
- Build and integrate advanced security tooling and automation to detect, prevent, and respond to AI-specific vulnerabilities across the development lifecycle, including adversarial testing frameworks for AI agents.
- Spearhead the strategy for integrating LLMs into the secure development lifecycle, including code review automation, vulnerability detection, and security documentation generation.
- Design and implement AI-powered security tools that can analyze code, identify potential vulnerabilities, and recommend secure coding patterns at scale.
- Lead proof-of-concept initiatives to demonstrate how generative AI can improve security posture through automated threat modeling, security testing, and developer education.
- Collaborate closely with product, engineering, and compliance teams to embed secure-by-default configurations and user consent checkpoints for sensitive AI actions involving PII, PHI, or critical business decisions.
- Drive continuous improvement of AI security posture by researching emerging attack vectors such as model poisoning, untrusted code execution, and supply chain risks related to open-source AI frameworks.
- Mentor and guide other engineers on secure AI development practices and contribute to organizational knowledge sharing around AI risk mitigation strategies.
Requirements: Security, SDLC, Automation, Cloud, LLM, AI Security
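Below is a minimal sketch of the kind of runtime guardrail and consent checkpoint described above, assuming a deny-by-default tool allowlist per agent identity and an explicit user-consent flag for sensitive actions. All names here are hypothetical, not Box's implementation.

```python
# Illustrative sketch: per-agent least-privilege tool access, a consent
# checkpoint for sensitive actions, and a simple audit trail.
from dataclasses import dataclass, field

# Actions treated as sensitive (e.g. touching PII/PHI) that need explicit consent.
SENSITIVE_TOOLS = {"export_content", "share_externally", "delete_record"}

@dataclass
class AgentIdentity:
    name: str
    allowed_tools: set = field(default_factory=set)  # least-privilege allowlist

class GuardrailViolation(PermissionError):
    pass

def authorize_tool_call(agent: AgentIdentity, tool: str,
                        user_consented: bool, audit_log: list) -> None:
    """Deny-by-default check to run before every agent tool invocation."""
    if tool not in agent.allowed_tools:
        audit_log.append(f"DENY  {agent.name} -> {tool}: not in allowlist")
        raise GuardrailViolation(f"{agent.name} may not call {tool}")
    if tool in SENSITIVE_TOOLS and not user_consented:
        audit_log.append(f"DENY  {agent.name} -> {tool}: consent checkpoint")
        raise GuardrailViolation(f"{tool} requires explicit user consent")
    audit_log.append(f"ALLOW {agent.name} -> {tool}")

if __name__ == "__main__":
    log: list = []
    summarizer = AgentIdentity("doc-summarizer", {"read_file", "search"})
    authorize_tool_call(summarizer, "read_file", user_consented=False, audit_log=log)
    print(log)
```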
Job details:
- Company: Box Inc.
- Location: Warsaw, Poland
- Category: Artificial Intelligence
- Position: Staff Security Engineer (AI Security)
- Schedule: full-time, 40 hours per week
- Start date: immediate
- Salary: not disclosed
- Posted: 9 Aug 2025
- Status: position active