Salesforce Asks AI-Curious Customers to First Invest in an Ethical Groundwork

B. Shimmin

Summary Bullets:

• There has been a significant rush among technology providers to make artificial intelligence (AI) a self-service endeavor, available to the broadest possible swath of business users.

• But in so doing, companies are creating unanticipated legal exposure for AI practitioners unprepared to protect AI from human bias.

Salesforce has added a new AI learning module to its Trailhead developer education platform with an interesting twist. Rather than teach developers how to build AI outcomes most efficiently, the company’s newest educational module asks that practitioners slow down and focus on creating ethically informed AI solutions.

The new Trailhead educational module, entitled “Responsible Creation of Artificial Intelligence,” calls attention to an often-overlooked threat from AI: unwitting human biases and intentional human prejudices.

Within these new training materials, Salesforce calls on Einstein developers to adopt its own set of core values of “trust, customer success, innovation, and equality.” The company goes so far as to suggest that developers who fail to adhere to these standards in creating AI algorithms may find themselves in breach of its acceptable use policy.

Why is Salesforce referencing an acceptable use policy in conjunction with the ethical use of AI? Surely companies not engaged in outright nefarious endeavors would steer clear of anything overtly illegal in building AI outcomes. Certainly, legislative controls such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are very clear about what constitutes an unlawful use of consumer data. Companies need only adhere to such policies to avoid potential litigation or censure, right?

Not necessarily, because human biases and prejudices can find their way into any AI-informed solution without detection. Throughout the lifecycle of a given AI solution, from data collection to ongoing maintenance, subtle but hugely impactful notions of partiality can creep in, thereafter altering the decisions made by both humans and automated AI routines.