Do we have a right to audit the content personalization systems we use? Building on prior work on discrimination detection in data mining, I propose algorithm auditing as a corresponding ethical duty for providers of content personalization systems, a duty that helps maintain the transparency of political discourse. I then explore barriers to auditing that reveal the practical limits of service providers' ethical duties. Content personalization systems can function opaquely and resist auditing. However, the belief that highly complex algorithms, such as bots using machine learning, are incomprehensible to human users should not become an excuse to surrender high-quality political discourse. Auditing is recommended as a way to map and redress algorithmic political exclusion in practice, although the opacity of algorithmic decision making remains a significant challenge to its implementation.