I may have inadvertently insulted Bret Taylor and Clay Bavor when I interviewed them last week about their new AI startup. Their new company, Sierra, is developing AI-powered agents to “elevate the customer experience” for large companies. Among its initial customers are WeightWatchers, Sonos, SiriusXM, and OluKai (a “Hawaiian-inspired” clothing company). Sierra’s ultimate market is any company that interacts with its customers, which is a huge opportunity. Their plan strikes me as a validation of the widely held prediction that 2024 will be the year when the AI models that have blown our minds for the past year turn into real products. So when I greeted the cofounders, whom I’ve known for years, I commented that their company seemed “very nuts and bolts.”

Was it wrong to say that? “I don’t know if that’s a compliment or a criticism or just a fact,” says Taylor, who left his job as co-CEO of Salesforce to start Sierra. I assured him I meant it as a compliment. “It’s not like you’re building girlfriends!” I noted.

It’s notable that two of the most visionary leaders in Silicon Valley are building an AI startup not to chase the nerd trophy of superintelligence, but to bring AI’s latest advances to non-techie, mainstream corporations. Their experience has brought them toe-to-toe with leading figures in the industry: Taylor was a key developer of Google Maps in the aughts, and Bavor headed Google’s VR efforts. They are eager to assure me that their hearts are still in moonshot mode. Both feel that conversational AI is a breakthrough on par with graphical user interfaces or smartphones, and will have at least as much impact on our lives. Sierra just happens to focus on a specific, enterprise-y aspect of it. “In the future, a company’s AI agent, essentially that company’s version of AI, will be as important as its website,” says Taylor. “It will completely change the way companies exist digitally.”

To build its bots in a way that performs this task effectively, pleasantly, and safely, Sierra had to develop some innovations that advance AI agent technology in general. To deal with perhaps the most worrisome problem, hallucinations that can misinform users, Sierra runs several different AI models together, with one model acting as a “supervisor” to ensure that the AI agent isn’t veering into woo-woo territory. When something with real consequences is about to happen, Sierra calls on the full firepower at its disposal. “If you interact with a WeightWatchers agent and you write a message, about four or five different large language models are called to decide what to do,” says Taylor.
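The supervisor pattern Taylor describes can be sketched in a few lines. This is a hypothetical illustration, not Sierra’s actual code: the two functions below are stand-ins for calls to real language models, and the “discount” check is an invented example of an off-policy draft the supervisor would block.

```python
# Hypothetical sketch of a supervisor pattern: one model drafts a reply,
# a second model vets it before anything reaches the customer.
# Both functions are stand-ins for real LLM API calls.

def draft_reply(user_message: str) -> str:
    """Stand-in for the primary agent model."""
    return f"Sure, I can help with that: {user_message}"

def supervisor_approves(draft: str) -> bool:
    """Stand-in for a supervisor model that vets the draft.

    A real supervisor would check for hallucinated facts or
    unauthorized promises; here we just block any draft that
    mentions a discount the agent was never cleared to offer.
    """
    return "discount" not in draft.lower()

def respond(user_message: str) -> str:
    draft = draft_reply(user_message)
    if supervisor_approves(draft):
        return draft
    # Fall back to a safe reply instead of shipping a risky one.
    return "Let me connect you with a human agent for that."
```

In a production system the supervisor would itself be a language model prompted to audit the draft, and consequential actions (refunds, cancellations) would trigger extra model calls, as Taylor’s “four or five” remark suggests.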

Because of the power, vast knowledge, and uncanny fluency of today’s large language models, these digital agents can understand a company’s values and procedures as well as a human, and perhaps better than some disgruntled worker in a North Dakota boiler room. Training them is more akin to imparting rules and regulations to an employee than to coding a system. What’s more, these bots are capable of exercising some, um, agency to meet the caller’s needs. “We found that many of our customers had one policy, and then another policy behind their policy, which actually mattered,” says Bavor. Sierra’s agents are sophisticated enough to know that, and smart enough not to spill the beans right away, offering customers special deals only if they push. Sierra’s goal is nothing less than to move automated customer interactions from hell to joy.
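The “policy behind the policy” behavior Bavor describes amounts to tiered concessions: the agent opens with the public policy and reveals a fallback offer only under pushback. A minimal sketch, with invented tiers and wording (nothing here reflects WeightWatchers’ or Sierra’s real policies):

```python
# Hypothetical sketch of tiered retention offers: the public-facing
# policy comes first, and the fallback ("the policy behind the policy")
# is revealed only if the customer pushes back.

def retention_offer(pushback_count: int) -> str:
    # Tier 0: the standard, public-facing policy.
    if pushback_count == 0:
        return "I can cancel that for you today."
    # Tier 1: the policy behind the policy, used only under pushback.
    if pushback_count == 1:
        return "Before you go, I can offer one month free."
    # Tier 2: out of concessions; escalate to a human.
    return "I understand. Let me connect you with a specialist."
```

In practice the tiers would be selected by a language model reading the conversation rather than by a counter, but the escalation logic is the same.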


This pitch reached the ears of one of Sierra’s first clients, WeightWatchers. When Taylor and Bavor told CEO Sima Sistani that AI agents could be real and relatable, she was curious. But the clincher, she told me, was when the cofounders told her that conversational AI could “empathize at scale.” She was in, and now WeightWatchers is using agents created by Sierra for its customer interactions.

Okay, but empathy? The Merriam-Webster dictionary defines it as “the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another.” I asked Sistani whether it might be a contradiction to say that a robot could be empathetic. After a pause where I could almost hear the gears grinding in her brain, she gave an answer. “It’s interesting when you put it that way, but we’re living in a 2D world. Algorithms are helping to determine the next connection we see, the next relationship we make. We as a society have moved past the notion that interactions with bots cannot be authentic.” Of course IRL is the ideal, she is quick to say, and agents complement real life rather than substitute for it. But she will not back down from her claim of empathy.

When I ask her for examples, Sistani tells me about a conversation where a WW member said she had to cancel her membership because of difficulties. The AI agent showered her with sympathy: “I’m so sorry to hear that … those difficulties can be so hard … let me help you with that.” And then, like a fairy godmother, the agent helped her find an alternative. “We’re very clear that it’s a virtual assistant,” says Sistani. “But if we weren’t, I don’t think you could tell the difference.”