
Summary Bullets:
• Despite the incredible interest in generative AI (GenAI), enterprises worry that large language models (LLMs) will hallucinate, generate toxic or biased content, and leak data, among numerous other concerns.
• At Dreamforce ’23, Salesforce highlighted the recently released Einstein Trust Layer, a framework that secures corporate data, evaluates content for toxicity, masks sensitive information, and provides an audit trail when using GenAI.
AI took center stage at Salesforce’s Dreamforce ’23 conference. During his keynote, Marc Benioff announced that the world is in an AI revolution, and that AI could change anything and will impact everything. Although Dreamforce was all about GenAI this year, AI isn’t a new focus for Salesforce. The company had already embedded AI capabilities into many of the solutions across its portfolio. Furthermore, it acquired natural language processing (NLP) expertise via its acquisition of Narrative Science in 2021. What is new this year, however, is that Salesforce is embedding GenAI capabilities into just about all of its solutions.
Salesforce recently announced several tools, including Marketing GPT, Commerce GPT, and Sales GPT, that enable customers to retrieve their corporate information in a natural-sounding narrative. Tableau Pulse enables line-of-business users to interact with analytics dashboards via text and suggests additional prompts for further analysis and drill-downs. Tableau Einstein (formerly branded Tableau GPT) targets the analyst community and enables analysts to interact with tools using textual commands. Other big announcements from Dreamforce included a slew of high-productivity tools: the GenAI-driven Einstein Copilot, which provides recommendations and assistance across all Salesforce apps, and Einstein Copilot Studio, which includes tools for building AI assistants, such as Prompt Builder (creates and deploys prompts customized for a company’s communication style), Skills Builder (creates pre-built custom actions), and Model Builder (enables customers to bring their own predictive model to Salesforce). Model Builder supports integrations with Amazon Bedrock, Amazon SageMaker, Anthropic, Cohere, Databricks, Google’s Vertex AI, and OpenAI.
However, despite the incredible interest in GenAI, enterprises are wary of the new technology. They worry that LLMs will hallucinate or be poisoned, or will generate biased or toxic content. All organizations are highly concerned about data privacy and data leakage; they don’t want corporate data, or the prompts created by employees, to be used to train an LLM or to enter the public domain. Customer-facing teams want responses intended for external audiences to reflect their corporate brand and style instead of sounding generic. Furthermore, IT departments aren’t sure which LLM will work best for which applications, how much customization of LLMs will be required, or whether they will need to stack LLMs (i.e., use multiple LLMs). They lack GenAI-specific skills, such as expertise in prompt engineering. And finally, enterprises want to ensure that GenAI-driven applications align with corporate ethics policies, don’t withhold or grant privileges in an inequitable manner, and adhere to regulations, including those related to the protection of intellectual property and copyrights.
Recognizing that it will need to mitigate many of these concerns (especially those related to hallucinations, toxicity, bias, and data privacy) to promote adoption of its GenAI-powered solutions, Salesforce talked up the Einstein Trust Layer. This framework establishes processes and guardrails for using corporate data with LLMs. It provides dynamic grounding, which adds context to prompts and ensures that the correct corporate data is retrieved from the Salesforce Data Cloud. It will include data masking so that sensitive information cannot be read by an LLM. Salesforce already has zero-retention agreements in place with several LLM providers that contractually prevent them from using Salesforce customers’ information, or their prompts, to train models or to improve their products. The Einstein Trust Layer framework also includes a tool for toxicity detection, which evaluates content across subcategories of toxicity (violent, profane, biased, racial, etc.), assigning a numerical score to each as well as an overall toxicity score. The framework also includes auditing capabilities, so that all metadata (requesting user, prompt, response, and toxicity scores) is time-stamped and documented. Salesforce also includes a ‘human in the loop’ in processes so that there is oversight of content sent to customers.
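The overall flow of such a trust layer (mask sensitive data, send the sanitized prompt to an LLM, score the response for toxicity, and record an audit trail) can be sketched in a few lines. This is an illustrative outline only, not Salesforce’s actual API: the `mask_pii`, `score_toxicity`, and `echo_llm` helpers are hypothetical stand-ins (a real system would use trained classifiers and a production LLM endpoint).

```python
import re
from datetime import datetime, timezone

# Hypothetical PII patterns; a production system would use a full entity-recognition service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace sensitive tokens with placeholders before the prompt leaves the trust boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def score_toxicity(text: str) -> dict:
    """Toy keyword-based scorer: assigns a score per subcategory plus an overall maximum.
    A real trust layer would use a trained toxicity classifier here."""
    categories = {"profane": ["damn"], "violent": ["attack"]}
    scores = {cat: (1.0 if any(w in text.lower() for w in words) else 0.0)
              for cat, words in categories.items()}
    scores["overall"] = max(scores.values())
    return scores

def trusted_generate(user: str, prompt: str, llm) -> tuple:
    """Mask, generate, score, and log: every request yields a time-stamped audit record."""
    masked = mask_pii(prompt)
    response = llm(masked)
    audit = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": masked,           # only the masked prompt is ever stored or sent
        "response": response,
        "toxicity": score_toxicity(response),
    }
    return response, audit

def echo_llm(prompt: str) -> str:
    """Stub LLM so the sketch runs without a live model endpoint."""
    return f"Draft reply based on: {prompt}"

resp, record = trusted_generate("agent-42", "Email jane.doe@example.com about renewal", echo_llm)
print(record["prompt"])  # the customer email address has been replaced by [EMAIL]
```

The point of the sketch is the ordering: masking happens before anything reaches the model, and the audit record captures the sanitized prompt, the response, and the toxicity scores together, so a ‘human in the loop’ can review outbound content against one complete log entry.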
Salesforce is wise to develop and communicate a clear framework that establishes best practices, along with guardrails, for the use of GenAI. It is also well placed: the company enjoys a reputation as a thought leader in corporate responsibility and ethics, on which it will need to lean heavily. But it is a tough ask; convincing customers to trust GenAI won’t be easy and will require a significant investment of time and effort by teams across the company. Furthermore, Salesforce’s strategy will likely require numerous tweaks along the way. But if Salesforce can get it right, the company will not only drive its own business forward but also be viewed as a market leader that forged a path for others to follow.
