In a lengthy post on Wednesday, Facebook CEO Mark Zuckerberg addressed the problem of “algorithmic bias” and how to create a better appeals process.

“Everything we’ve discussed so far depends on building artificial intelligence systems that can proactively identify potentially harmful content so we can act on it more quickly. While I expect this technology to improve significantly, it will never be finished or perfect,” declared Zuckerberg. “A fundamental question is how we can ensure that our systems are not biased in ways that treat people unfairly. There is an emerging academic field on algorithmic fairness at the intersection of ethics and artificial intelligence, and this year we started a major effort to work on these issues.”

“Our goal is to develop a rigorous analytical framework and computational tools for ensuring that changes we make fit within a clear definition of fairness. However, this is not simply an AI question because at a philosophical level, people do not broadly agree on how to define fairness,” he explained. “To demonstrate this, consider two common definitions: equality of treatment and equality of impact. Equality of treatment focuses on ensuring the rules are applied equally to everyone, whereas equality of impact focuses on ensuring the rules are defined and applied in a way that produces equal impact. It is often hard, if not impossible, to guarantee both.”

“Focusing on equal treatment often produces disparate outcomes, and focusing on equal impact often requires disparate treatment. Either way a system could be accused of bias. This is not just a computational problem — it’s also an issue of ethics,” the Facebook CEO continued. “Overall, this work is important and early, and we will update you as it progresses.”
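The tension Zuckerberg describes can be made concrete with a small sketch. In this hypothetical example (the groups, scores, and thresholds are invented for illustration and are not from Facebook's systems), applying one shared rule to two groups with different score distributions produces unequal outcomes, while equalizing outcomes requires group-specific rules:

```python
# Hypothetical scores for content from two groups; higher = more likely acceptable.
# All values here are invented to illustrate the trade-off, not real data.
group_scores = {
    "group_a": [0.9, 0.8, 0.7, 0.6, 0.5],
    "group_b": [0.7, 0.6, 0.5, 0.4, 0.3],
}

def approval_rate(scores, threshold):
    """Fraction of items approved under a given score threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Equality of treatment: the same threshold applied to everyone...
shared_threshold = 0.65
rates_equal_treatment = {
    g: approval_rate(s, shared_threshold) for g, s in group_scores.items()
}
# ...yields disparate impact: group_a has 3/5 approved, group_b only 1/5.

# Equality of impact: per-group thresholds chosen so approval rates match...
per_group_thresholds = {"group_a": 0.65, "group_b": 0.45}
rates_equal_impact = {
    g: approval_rate(s, per_group_thresholds[g]) for g, s in group_scores.items()
}
# ...which means the two groups are no longer treated identically.
```

Under the shared threshold the rule is identical for both groups but the approval rates diverge (0.6 vs. 0.2); matching the rates requires different thresholds per group. Either choice can be characterized as biased, which is the point of Zuckerberg's remark.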

On the topic of Facebook’s appeal system, Zuckerberg declared, “Any system that operates at scale will make errors, so how we handle those errors is important. This matters both for ensuring we’re not mistakenly stifling people’s voices or failing to keep people safe, and also for building a sense of legitimacy in the way we handle enforcement and community governance.”

“We began rolling out our content appeals process this year. We started by allowing you to appeal decisions that resulted in your content being taken down. Next we’re working to expand this so you can appeal any decision on a report you filed as well,” he proclaimed. “We’re also working to provide more transparency into how policies were either violated or not.”

“In practice, one issue we’ve found is that content that was hard to judge correctly the first time is often also hard to judge correctly the second time as well,” Zuckerberg concluded. “Still, this appeals process has already helped us correct a significant number of errors and we will continue to improve its accuracy over time.”

Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington, or like his page at Facebook.