BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Advanced Research and Invention Agency (ARIA) | Safeguarded AI
DTSTART;VALUE=DATE:20250102
DTSTAMP:20260427T224108Z
UID:4a1f1a75e8ee160796b818160d8d8a1ae08d096b2faef085af6d7736
CATEGORIES:Call for proposal
DESCRIPTION:Aim: ARIA is looking to support teams from the economic\, soc
 ial\, legal and political sciences to consider the sound socio-technical
  integration of Safeguarded AI systems. This solicitation seeks R&
 D Creators – individuals and teams – to work on problems that are plau
 sibly critical to ensuring that the technologies developed as part of th
 e programme will be used in the best interest of humanity at large\, and
  that
  they are designed in a way that enables their governability through repre
 sentative processes of collective deliberation and decision-making. \n\nA
  few examples of the open problems to address:\n\n	Qualitative deliberatio
 n facilitation: What tools or processes best enable representative input\
 , collective deliberation and decision-making about safety specifications\
 , acceptable risk thresholds\, or success conditions for a given applicati
 on domain?\n	Quantitative bargaining solutions: What social choice mechan
 isms or quantitative bargaining solutions could best navigate irreconcilab
 le differences in stakeholders’ goals\, risk tolerances\, and preference
 s\, in order for Safeguarded AI systems to serve a multi-stakeholder notio
 n of public good?\n	Governability tools for society: How can we ensure th
 at Safeguarded AI systems are governed in societally beneficial and legiti
 mate ways?\n	Governability tools for organisations: Organisations develop
 ing Safeguarded AI capabilities have the potential to create significant e
 xternalities – both risks and benefits. What set of decision-making and
  governance mechanisms is best suited to ensure that entities developing
  or deploying Safeguarded AI capabilities treat these externalities as a
 ppropriately major factors in their decision-making?\n\nThe call is also
  open
  to applications proposing other lines of work which illuminate critical s
 ocio-technical dimensions of Safeguarded AI systems\, if they propose solu
 tions to increase assurance that these systems will reliably be developed 
 and deployed in service of humanity at large.\n\nDuration: 18 months\n\nFu
 nding: ARIA expects to invest £3.4 million across 2-6 teams\n\nEligibilit
 y: \n\n\n	Applications from across the R&D ecosystem\, including individua
 ls\, universities\, research institutions\, small\, medium and large compa
 nies\, charities and public sector research organisations are welcomed\n	O
 verseas applicants are advised that ARIA’s primary focus will be on fundin
 g those who are based in the UK or those willing to conduct all or part of
  the project from the UK. However\, funding will be available to applicant
 s outside the UK if ARIA believes the proposed project can significantly b
 enefit the UK (examples of benefit listed here\, p.23)\n\nHow to Apply: Ap
 plications must be submitted on ARIA’s own platform\, via the applicant
 ’s account. You’ll find a walkthrough of the platform here.\n\nDeadlin
 e\, Concept paper: 02 January 2025\n\nFurther information\n\n\n	More in
 formation about the programme is available here and the full call here\n
 	The application portal can be found here\n	For any other questions\, pl
 ease contact the Research Office.\n
LOCATION:
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
