Security Engineer, Product Security

DeepMind


At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.


Snapshot

We’re looking for a Security Engineer who is interested in helping us develop and deploy next-generation AI safely and securely. 

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

We are a growing global security and privacy research and engineering group with a diverse set of skills supporting the organisation’s wider mission, objectives and priorities. The group plays a critical role in this mission in two main ways:

  1. Researching hard problems in AI security and privacy to stay ahead of adversaries, and
  2. Engineering scalable platforms and infrastructure that enable the safe and secure development and deployment of AI.

As an early member of the broader team, you will play a critical role in decisions that will have a significant impact on the Google DeepMind mission.

Google DeepMind relies on a multitude of systems, platforms and tools to run experiments, train models, and uncover new breakthroughs. 

As a Security Engineer, you will collaborate with Alphabet and Google DeepMind engineers and researchers to identify threats and implement remediations that minimise product security risks from emerging threats to AI systems and their supporting platforms, infrastructure, and data. 

Your job is to work across the entire AI tech stack to enable our researchers to make ever faster progress towards our mission, with a keen focus on security and privacy. You will examine every software component that may present an attack surface, applying both proven and novel approaches to minimise risk. This means working with researchers to understand their unique challenges, exploring potential solutions, and implementing fixes to achieve secure designs for both present and future innovations.

Key responsibilities:

  • Establish Security Controls: Design, develop, and implement security controls to safeguard systems, networks, and data, ensuring robust defence against both emerging and established threats. This includes code reviews, threat modelling, lightweight testing, etc.
  • Consistent Implementation: Ensure the consistent application and enforcement of security controls across all relevant systems and environments through automation and tooling, in order to maintain a high standard of protection.
  • Collaboration with Research: Work closely with research and development teams to innovate, create, and help implement both established and novel security solutions. This effort will help address both existing and emerging threats and vulnerabilities.
  • Automated Evaluation: Develop and implement automated processes to pragmatically assess the effectiveness of security controls, ensuring continuous and rapid identification of issues and trends.
  • Metrics & Reporting: Produce clear, actionable metrics to track the performance and impact of security controls, helping drive data-driven improvements.
  • Team Collaboration: Foster a positive, respectful, and collaborative team environment, where everyone contributes to achieving security goals, emphasising Google DeepMind’s culture of responsible innovation.

About You

In order to set you up for success as a Security Engineer at Google DeepMind, we look for the following skills and experience:

  • Bachelor’s degree or higher in Computer Science, a related technical field, or equivalent practical experience
  • Software development experience with C++ and/or Python
  • An understanding of software vulnerabilities: how they are discovered and prevented, and how remediation controls are balanced against performance, reliability, and other mission-critical objectives
  • Experience with developing platforms and infrastructure for machine learning systems, and how machine learning pipelines operate
  • Interest in AI/ML security and privacy, with an active mindset on personal growth and self-improvement
  • Ability to influence and empathize with the needs of others, and to foster a positive, collaborative environment

In addition, the following would be an advantage: 

  • A track record of building secure systems: you have worked on software projects delivering mission-critical systems with a high bar for security, and understand the importance of security being part of the design from the start
  • The ability to quickly decompose and understand non-standard and novel systems that provide common infrastructure services
  • Experience with secure software architecture and design, ensuring critical systems are protected from internal and external risks
  • Familiarity with data security and privacy methods (e.g. differential privacy, federated learning, confidential computing) to ensure sensitive data is protected end-to-end
  • Experience with defensive programming to ensure the continued functioning of a piece of software under unforeseen circumstances
  • Experience fuzzing software using appropriate tools and techniques to find security issues

The US base salary range for this full-time position is between $189,000 and $300,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Application deadline: 4th October 2024

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
