Data Governance Reflects and Reinforces Data Ethics
Data governance and data ethics are intrinsically linked. While data governance refers to the tactical considerations of how data is collected, handled, and used, data ethics encompasses the thinking that gets us to that point. In other words, data ethics explores how data-related practices impact people. It describes the moral considerations that inform data governance frameworks, along with data strategies in general.
Data ethics is more than a moral obligation; it's a business imperative. Organizations with strong data ethics codes are not only more likely to avoid regulatory fines and lawsuits; their data also tends to be cleaner, higher quality, and more usable. As a result, their data-driven products and applications deliver more value.
Data Ethics is Centered Around Accountability
Accountability refers to an organization's reflective, reasonable, and systematic use and protection of personal data. To ensure data is properly handled throughout its lifecycle, companies must instill a culture of accountability in which people treat proper enterprise data management as a mission-critical responsibility rather than a mere compliance obligation.
To do this, many companies establish a data ethics and accountability committee that reports to the C-level. One such company is Adobe. As explained on their website: “Since AI lives at the intersection of technology and human insight, we needed a range of perspectives to help us form our principles and determine our approach. Our ethics committee includes experts from around the world with diverse professional backgrounds and life experiences, and we’re confident in their ability to guide our efforts.
The board makes recommendations to help guide our development teams, and it also reviews new AI features and products to ensure that they live up to our principles. The board is empowered to stop deployment of any feature that doesn’t meet our standards.”
Data Ethics is a Human Rights Issue
Every day we see stories about biased artificial intelligence (AI). Just this week, Twitter revealed that its image-cropping algorithm excluded Black people from photos. It was also recently revealed that an AI tool Uber used to run security checks on its drivers had a 20.8% failure rate for darker-skinned women, compared with 0% when tested on white men.
According to the Stanford Social Innovation Review, a recent study on gender-biased AI systems found that:
- 70% resulted in lower quality of service for women and non-binary individuals. Voice-recognition systems, increasingly used in the automotive and health care industries, for example, often perform worse for women.
- 61.5% of the systems identified as gender-biased resulted in unfair allocation of resources, information, and opportunities for women, including hiring software and ad systems that deprioritized women's applications.
In other words, biased AI simply doesn’t work.
The most common cause of AI bias is biased training data. Take Amazon, for example, which came under fire when its hiring algorithm was revealed to be sexist. By training its new AI on historical hiring data, the company effectively instilled it with its own historic prejudices. As a result, the algorithm, just like the human recruiters that came before it, favored white, middle-aged men for leadership roles.
Though curating high-quality, unbiased training data is one of the key ways to combat AI bias, it isn't always possible. What is possible is hiring diverse data science teams and empowering them to identify and proactively address problematic outcomes.
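To make this concrete, here is a minimal sketch, in Python with entirely hypothetical data, of one common first step in auditing training data for this kind of bias: comparing selection rates across demographic groups and applying the "four-fifths rule" of thumb. This is not how Amazon's system was evaluated; it simply illustrates how skewed historical outcomes can be flagged before a model learns them.

```python
# A minimal bias-audit sketch. The candidate records and "hired" outcomes
# below are hypothetical, invented purely for illustration; they do not
# come from any real hiring dataset.
from collections import defaultdict

# Hypothetical historical hiring outcomes: (group, was_hired)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

# Tally hires and totals per demographic group.
hired = defaultdict(int)
total = defaultdict(int)
for group, was_hired in records:
    total[group] += 1
    hired[group] += was_hired

# Selection rate: the share of candidates in each group who received
# the favorable outcome.
rates = {g: hired[g] / total[g] for g in total}
for g, r in sorted(rates.items()):
    print(f"{g}: selection rate = {r:.2f}")

# Disparate impact ratio: lowest selection rate divided by highest.
# A common rule of thumb (the "four-fifths rule") flags ratios below
# 0.8 as a sign the data may encode bias a model would go on to learn.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Warning: training data may encode group bias.")
```

An audit like this is deliberately crude: it catches only the most obvious skew in outcomes, which is why the human review and diverse teams described above remain essential.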