The development of privacy-enhancing technologies has made immense progress
in reducing trade-offs between privacy and performance in data exchange and
analysis. Similar tools for structured transparency could be useful for AI
governance by offering capabilities such as external scrutiny, auditing, and
source verification. Viewing these different AI governance
objectives as a system of information flows helps avoid partial solutions
and major gaps in governance, since the software stacks needed for the AI
governance use cases mentioned in this text may overlap substantially.
When the system is viewed as a whole, the importance of interoperability between
these different AI governance solutions becomes clear. It is therefore
eminently important to treat these problems in AI governance as a system
before the relevant standards, auditing procedures, software, and norms settle into
place.
Authors of this post: <a href="http://arxiv.org/find/cs/1/au:+Bluemke_E/0/1/0/all/0/1">Emma Bluemke</a>, <a href="http://arxiv.org/find/cs/1/au:+Collins_T/0/1/0/all/0/1">Tantum Collins</a>, <a href="http://arxiv.org/find/cs/1/au:+Garfinkel_B/0/1/0/all/0/1">Ben Garfinkel</a>, <a href="http://arxiv.org/find/cs/1/au:+Trask_A/0/1/0/all/0/1">Andrew Trask</a>