
Operationalising “Responsible Artificial Intelligence” (RAI) in Public Administration (Tier 1 Project)

Project Information

The proposed research project breaks new ground on a complex and urgent task: operationalising “responsible artificial intelligence” (RAI) in the context of public administration. AI tools are increasingly being applied in “administration” – a broad term intended to capture the full range of official decision-making and public service delivery, regardless of modality (i.e., including public-private partnerships and other contractual arrangements as well as ideal-typical “government” administration).

The last decade has seen a proliferation of ethical principles applicable to AI in various contexts, including administration. However, these principles are generally aspirational rather than operational: they set expectations of “responsibility” (or “trustworthiness”, “accountability”, etc.) but say little about how to meet them. There is a clear implementation gap between:

  • Ethical principles (many of which are open-textured and contested);
  • Existing legal structures, including redress mechanisms; and
  • Technical standards and system functionalities/capabilities.

Operationalising AI ethics requirements will involve innovations at various layers of the technology stack. Often, making an AI system (such as a large language model) “ethical” involves superimposing constraints on its capabilities, deliberately diminishing some desirable functionality in order to avoid ethical risk.
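
By way of illustration only, the following Python sketch shows one crude form of such a constraint: a post-hoc guardrail that withholds a model’s draft output when it fails an ethical check, sacrificing some useful functionality in exchange for reduced risk. All names and fields here are hypothetical, and the stand-in model call is a placeholder rather than any particular system’s API.

    from dataclasses import dataclass

    @dataclass
    class Draft:
        text: str
        cites_evidence: bool  # does the draft point to supporting material?

    def model_generate(prompt: str) -> Draft:
        # Placeholder for a call to an underlying language model.
        return Draft(text=f"Draft reasons for: {prompt}", cites_evidence=False)

    def constrained_generate(prompt: str) -> str:
        """Superimpose a constraint: withhold drafts that lack evidence."""
        draft = model_generate(prompt)
        if not draft.cites_evidence:
            # The constraint narrows capability on purpose: some otherwise
            # useful outputs are blocked to avoid unexplained decisions.
            return "Referred for human review: draft cited no evidence."
        return draft.text

    print(constrained_generate("benefit eligibility, applicant 123"))

The point of the sketch is the trade-off itself: the guardrail makes the tool less capable in exchange for a property (no unexplained adverse output) that an ethical principle demands.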

Just as importantly, it requires innovation in the “social” layers surrounding the tool, especially the organisational context of application and managerial oversight. Finally, it may require review and augmentation of external accountability mechanisms, including mechanisms of legal redress – such as judicial review or private law (e.g. tort) liability – when things go wrong.

The true object of regulation is a socio-technical system, not just a technical one. This demands collaboration across disciplines and indeed sub-disciplines. The project is a collaboration between two research centres ideally situated to tackle this complex problem together: the SMU Centre for Digital Law (CDL) and the SMU Centre for Research on Intelligent Software Engineering (RISE). Together, the Centres can explore the intersection of AI, software engineering, cyber security, law, ethics, and human-machine interaction in a truly transdisciplinary research project.

The purpose of this Tier 1 Project is to: 

  • Scope the implementation gap and develop theory and methodology to support a “joined up”, transdisciplinary approach to operationalising AI ethics;
  • Establish a working group across CDL and RISE;
  • Seed an international collaboration network with motivated colleagues at leading universities abroad through the organisation of a project conference;
  • Develop a suite of scenarios that highlight the complexities involved in codifying high-level legal requirements and translating them into low-level properties amenable to automatic or semi-automatic compliance assessment and verification, as well as non-compliance detection and avoidance (see the sketch after this list); and
  • Construct an initial collection of prototypes designed to partly resolve the challenges of effectively formalising and implementing RAI for administrative applications. These prototypes should also surface deep legal and technical challenges in anticipation of the forthcoming Tier 3 project proposal.
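
To make the fourth aim concrete, here is a deliberately simplified, hypothetical sketch of the kind of translation the scenarios would probe: a high-level requirement (say, that adverse automated decisions must be reviewed by a human officer) recast as a low-level property checkable mechanically over decision records. The record fields and the requirement itself are illustrative assumptions, not drawn from any actual legal instrument.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DecisionRecord:
        decision_id: str
        automated: bool                # was the decision machine-made?
        adverse: bool                  # did it go against the applicant?
        human_reviewer: Optional[str]  # named officer, if any

    def complies(r: DecisionRecord) -> bool:
        """Low-level property: automated AND adverse implies a named reviewer."""
        return (not (r.automated and r.adverse)) or r.human_reviewer is not None

    def non_compliant(records: list[DecisionRecord]) -> list[str]:
        """Semi-automatic non-compliance detection over a batch of records."""
        return [r.decision_id for r in records if not complies(r)]

    batch = [
        DecisionRecord("A-1", automated=True, adverse=True, human_reviewer=None),
        DecisionRecord("A-2", automated=True, adverse=True, human_reviewer="Officer X"),
        DecisionRecord("A-3", automated=False, adverse=True, human_reviewer=None),
    ]
    print(non_compliant(batch))  # prints ['A-1']

Even this toy example hints at where the hard questions lie: deciding what counts as “adverse” is itself an open-textured legal judgment, and a recorded reviewer name says nothing about whether human involvement was genuine rather than perfunctory.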