META AI Risk and Security Lead

  • Full-time

Company Description

Twitter is looking for an experienced individual to lead a new practice within META conducting risk assessments and audits of our algorithmic systems. The Machine Learning Ethics, Transparency & Accountability research team, known as META, sits within Cortex, Twitter’s centralised Machine Learning organisation, where it is responsible for investigating the fairness and transparency of Twitter’s automated algorithmic decision-making systems and addressing the harms they may introduce.

We believe in the power of bringing multiple perspectives together. The Machine Learning Ethics, Transparency & Accountability research team operates at the intersection of Machine Learning, Social Science, Policy, Legal and User Research in collaboration with numerous partners from across Twitter.

Job Description

As AI Risk and Security Lead at Twitter META, you will grow a new practice within the company. The team’s remit is to conduct technical audits of our models to ensure we proactively identify and address algorithmic harms. This team is new, and the right lead will:

  • Deliver standardized and thorough sociotechnical assessments of our models at Twitter

  • Collaborate with ML developer teams to conduct and complete assessments in a timely manner

  • Document and catalog ethical issues discovered during assessments

  • Ensure identified risks and harms are appropriately addressed

  • Research and apply novel tactics, techniques, and procedures relevant to ML ethics

  • Collaborate with META engineers to build tooling to automate portions of assessments

  • Identify and hire the right team  

  • Author an annual model risk report

In addition, the lead will work with META’s Director, Dr. Rumman Chowdhury, to build our bias bounty program. This role offers a significant opportunity to shape the Responsible ML industry.

Qualifications

We’re looking for an experienced technical leader with domain expertise in ML. You should have a solid technical understanding of risk assessment or audit work, either in ML Ethics or a related field, as well as experience building and growing a team. ML Ethics as an applied industry is in its infancy, so a successful candidate is comfortable developing new, and sometimes first-in-industry, products.

As the Machine Learning Ethics, Transparency & Accountability research team is part of our centralised Machine Learning organisation, you should also be experienced in cross-functional collaboration with product leaders, ML Engineers, legal, risk & compliance, and related functions.

If this sounds like it might be you, you probably have:

  • Technical expertise in Machine Learning, including expertise in ML ethics, fairness & transparency 

  • A proven ability to collaborate cross-functionally with distributed teams

  • Experience in applied audit or assessment work in algorithmic systems or in a related field

  • Excellent interpersonal skills that allow you to advocate effectively for your teams and charter across Twitter

  • A demonstrated ability to simplify and present technical concepts to non-technical partners to explain impact and drive support

  • A strong growth mentality

 

Requirements:

  • 2+ years of prior audit or assessment experience within the field of Machine Learning ethics, fairness & transparency, or a related industry

  • 8-10 years of ML experience

  • Outstanding written and verbal communication skills

Additional Information

All your information will be kept confidential according to EEO guidelines.