
Expression of Interest: AI policy researcher (contractor)

Deadline: ongoing


Summary:

The Centre for Long-Term Resilience is looking for a part-time, temporary AI policy researcher to support our AI policy unit on a contractor basis. This is an exciting opportunity to help our team tackle some of today’s most challenging and important AI policy questions. We are looking for people who are passionate about mitigating the risks of AI and have experience of independently delivering high-quality research.


We are open to different working arrangements based on what works for you, from short-term support on a project (e.g. 3-5 days over one month) to a longer-term arrangement (e.g. 2 days per week for 2-6 months). 


You can work with our team at our office in the heart of Whitehall, if this suits you, or remotely.


Job title: AI policy researcher (contractor)


Type: We plan to find an arrangement that works for both of us. This could range from work on a single ad hoc project (e.g. 5 days’ support over 30 days), to longer-term arrangements (e.g. 2 days per week over 2-6 months).


Point of contact: AI Policy Manager


Salary: Approx. £50–£70 per hour, depending on experience


Location: Westminster, London; we will also consider remote candidates within six hours' time difference of London.


Start date: ASAP



Application deadline: Ongoing (we will be monitoring applications as they come in)


Full Details:


About CLTR

The Centre for Long-Term Resilience (CLTR) is a UK-based think tank with a mission to transform global resilience to extreme risks: high-impact threats with global reach. We do this by working with governments and other institutions to improve relevant governance, processes and decision-making, with a particular focus on the UK government.


We focus on three main areas where effective governance today could substantially mitigate both current and future threats: the safe development and use of AI; biosecurity and pandemic preparedness; and improving how governments manage these types of extreme risks. We work with experts and policymakers to develop concrete and actionable policy recommendations in these areas, and to advocate for those recommendations in senior policy communities.


You would be joining a small but ambitious team with experience across academia, government, non-profits, and the private sector. Over the last year, we have built a leadership team with expertise across our key policy areas, and are now beginning to expand the organisation to increase our capacity for impact.


What you’ll do

The purpose of the role is to provide CLTR's AI policy unit with high-quality research briefs to support our policy development. These research tasks might include:


  1. Conducting literature and landscape reviews. For example, reviewing draft frontier AI legislation proposals in the UK and around the world, identifying common features and key divergences; or reviewing the existing literature on the persuasiveness of AI models.

  2. Drafting sections of reports. For example, writing a section (e.g. 750 words) on the debate on release controls for open source models, for a report we are writing on the misuse of open source models.

  3. Writing briefs for a workshop, meeting or event. For example, providing a synthesis of interviewees’ perspectives on export controls that the AI Policy Manager can use in preparation for an appearance on a panel.

  4. Other policy and research support as needed. For example, pulling together a range of our existing policy proposals into a short document for sharing with stakeholders.

We will work closely with you to make sure you understand the context and purpose of the brief while allowing you to work independently in your own time to produce outputs by the agreed deadline.


A typical task could look like the following: i) we give you a brief with the context of the requirement, a research question, and guidance on methods and the output we are looking for; ii) we jump on a call to clarify any questions you might have; iii) you conduct research and write up a clear, concise draft (e.g. 2-5 pages) in your own time (e.g. over the course of 1-4 weeks); iv) we continue to meet throughout delivery (e.g. once per week), and provide written feedback on drafts.


We are open to different working arrangements depending on your preferences and availability. This could include contracting you for one project at a time, such as delivering a research task over 3-5 days across a one-month period; or, it could involve a more consistent, ongoing temporary arrangement, such as 2 days per week for several months. You’ll have a chance to explain your availability and preferences in the application form. We also want you to know that this is a fixed-term contract and is unlikely to be extended or lead to a permanent role.


What you’ll bring

We are looking for someone with a proven ability to deliver high-quality research with minimal supervision, and a strong interest in AI and its risks. You will need to be able to digest a brief and understand what is required of you (asking for more information when you need it), deliver a piece of work fairly independently, and reliably meet deadlines.


We are particularly looking for the following:

  • Evidence of ability to conduct high-quality research, such as in academic qualifications, publications, or performance in a policy role requiring a similarly high standard of research

  • A genuine and demonstrable passion for AI and addressing its risks to people and societies

  • A commitment to research integrity, intellectual honesty and humility, and an ability to enthusiastically receive and action feedback

  • Strong attention to detail, especially in the preparation and formatting of written work

  • The ability to juggle several competing priorities in a high-tempo environment and prioritise effectively between them

  • Great interpersonal skills, with the ability to communicate efficiently and professionally


What we’ll offer you 

Working at CLTR means using your experiences, skills and passion to contribute to our mission of transforming global resilience to extreme risks. You'll be working alongside purposeful and committed colleagues to support governments and other institutions in changing the ways they work.


As well as the competitive hourly rate we pay for this contractor role (£50–£70 per hour), we can also offer insight into the practical work of a think tank and professional experience that could help you take your next step in the world of AI policy.


And finally…

We know that our organisation is strengthened by the rich variety of perspectives that a truly diverse workforce brings. We are committed to ensuring that our selection processes are as fair as they can possibly be, providing a level playing field for anyone who wants to come and work at CLTR.


We are committed to creating an inclusive organisation where everyone we work with is able to thrive. We are also open to being flexible wherever possible to ensure you are able to do your best work.


Express your interest

To express your interest in this temporary role, please complete this form: https://airtable.com/app2N1yWbGicxufyJ/pagL4sTbzLbqnx55O/form. We’ll continue to monitor applications and reach out to you if we think there might be a fit. Thank you for reading. If you have any questions, please reach out to hiring@longtermresilience.org.
