Moving Beyond The Classic Build-Deploy-Analyze Bot Development Process For Enterprise

NLX is innovating enterprise bot development. Less time building, more time finessing the perfect customer journey in reaction to real-time data.

Derrick Bradley


This 'Thoughts by Andrei' is penned by Derrick Bradley, NLX's Chief Growth Officer. Learn more about Derrick here.

Despite all of the advancements being made in conversational AI, the process of building a bot for enterprise remains laborious. Manual intervention is needed across the entire build-deploy-analyze arc of bot building, particularly in the areas of conversation flow design, content creation, system integration, and testing/debugging.

The Standard Build-Deploy-Analyze Process

1. Selecting a use case. Before you start building, you have to know what you want to build and why. This is usually driven by strategic discussions internally or by well-known industry-wide use cases that are quickly becoming table stakes. It’s often the case that customers aren’t entirely sure where to start to get the most ROI out of their bot, in which case we suggest learning from your customers and prioritizing based on what’s driving volume and demand.

2. Designing the conversation flow. One of the most important aspects of creating a conversational AI bot is designing the conversation flow. This involves mapping out all the possible conversation paths the bot may take, including the different ways customers may phrase their questions or requests. This can be a time-consuming and manual process that requires careful consideration of customer needs and expectations.

3. Creating the content. Writing content at every stage of a customer journey is largely a manual process. You have to consider the tone of voice, brand guidelines, personalization, length of response, channel, and so on. This is one of the first places we’re seeing generative AI provide some efficiencies and is something NLX is working hard on.

4. Integrating with other systems. Conversational AI bots often need to integrate with other systems to send and receive information that is pertinent to resolving a customer query. This can involve complex workflows and APIs that require manual configuration and testing.

5. Testing and debugging. Testing and debugging are important parts of the bot-building process. This involves manually testing the bot's conversation flow, NLP performance, and points of integration to make sure it’s working as intended. If errors are found, they are largely debugged and resolved manually.

6. Deploying and analyzing. Finally, after the bot has been deployed, the onus is on the customer to analyze its performance and collect feedback. This involves mapping the performance of the bots to internal metrics, which often requires an export to an enterprise customer’s BI infrastructure.

Looking at this process, the question our team has been asking ourselves is: “How can we apply generative AI to reduce time-to-value for our customers?” While we’re still early in the process of answering this question, we’ve already started to experiment with (and implement) some key features and guidelines for customers. Here’s an early look at what we’re moving towards:

Designing conversation flows. We’re experimenting with various ways to train our bots on existing conversational data to learn the patterns and rules that govern a conversation’s flow. If we can turn those rules into a reusable pattern language, we can then automatically build or suggest conversation flows from that information. The objective is to significantly reduce the effort involved in designing the conversation flow and to ensure it’s optimized for a customer’s requirements. The first version of this might look like suggesting a next step to the builder of a bot, but we’re aiming higher than that: we want to be able to propose entire flows back to a customer that can then be amended and approved. Doing this right might mean doing away with generalized templates as starting blocks and jumping straight to personalized bots designed with your rules and structures in mind.
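To make the “suggest a next step” version of this concrete, here is a minimal sketch of the underlying idea: mine historical conversation logs for step-to-step transition frequencies and propose the most common follower. The log format and step names below are hypothetical illustrations, not NLX’s actual schema or approach.

```python
from collections import Counter, defaultdict

# Hypothetical conversation logs: each is the sequence of steps one real
# conversation took. Step names here are invented for illustration.
logs = [
    ["greet", "identify_booking", "change_flight", "confirm", "goodbye"],
    ["greet", "identify_booking", "cancel_flight", "confirm", "goodbye"],
    ["greet", "identify_booking", "change_flight", "confirm", "goodbye"],
]

# Learn transition frequencies: for each step, count which step follows it.
transitions = defaultdict(Counter)
for log in logs:
    for current, following in zip(log, log[1:]):
        transitions[current][following] += 1

def suggest_next_step(current_step):
    """Propose the most common next step seen in historical conversations."""
    counts = transitions.get(current_step)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_next_step("identify_booking"))
```

A production version would, of course, weigh outcomes (resolution, escalation) rather than raw frequency, but the pattern-mining shape is the same.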

Data collection and cleaning. We aim to completely automate data collection and preparation for specific use cases, starting with Knowledge Base chatbots. While integrations into systems like Twilio Segment or Zendesk are great (and we have them), they are no longer prerequisites to building Knowledge Base bots. Not only can we crawl an existing knowledge base without integrating, we can also embed that knowledge at any point of an existing customer journey. For example, a customer might be rebooking a flight on a regional jet and have a question about carry-on weight limits; we can answer that question without disrupting the overall self-service flow. It goes without saying, but we can apply the same principles to unstructured data across an enterprise to reduce change management headaches.
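A toy sketch of the mid-flow retrieval step: given articles already crawled from a knowledge base (simulated here as plain strings), pick the best match for a side question by simple word overlap, so the answer can be given without leaving the rebooking flow. Real systems would use embeddings; the article texts and scoring below are illustrative only.

```python
# Simulated crawl output: article texts keyed by topic (hypothetical content).
articles = {
    "carry-on": "Carry-on bags on regional jets are limited to 10 kg and must fit under the seat.",
    "refunds": "Refunds are processed within 7 business days to the original payment method.",
}

def answer_from_kb(question):
    """Return the best-matching article by naive word overlap, or None."""
    q_words = set(question.lower().split())

    def overlap(text):
        return len(q_words & set(text.lower().split()))

    best = max(articles.values(), key=overlap)
    return best if overlap(best) > 0 else None

# Asked mid-rebooking, answered without disrupting the flow:
print(answer_from_kb("what is the carry-on weight limit"))
```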

Content creation. We’ve already applied some powerful generative AI capabilities, built on OpenAI’s underlying GPT model, that allow our customers to respond to an inquiry with generative responses. Customers get fine-grained control over when and where those responses are used, along with guardrails around which topics the bot will engage on. While we fine-tune this capability, we’re also exploring incorporating a brand’s guidelines, tone of voice, and so forth into voice and chat channels.
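The guardrail idea can be sketched in a few lines: the builder declares which topics the bot may answer generatively, and anything else falls back to a scripted handoff. Here `generate_reply` is a stand-in for a call to a model such as GPT, and the topic list, classifier, and wording are all hypothetical, not NLX’s implementation.

```python
# Topics the builder has approved for generative answers (illustrative).
ALLOWED_TOPICS = {"baggage", "check-in", "flight status"}

def classify_topic(message):
    """Naive keyword-based topic classifier, for illustration only."""
    for topic in ALLOWED_TOPICS:
        if topic.split()[0] in message.lower():
            return topic
    return "other"

def generate_reply(message):
    # Placeholder for a real model call (e.g. to a GPT-family model).
    return f"[generated answer about: {message}]"

def respond(message):
    """Only answer generatively inside the approved topic guardrails."""
    if classify_topic(message) in ALLOWED_TOPICS:
        return generate_reply(message)
    return "Let me connect you with an agent who can help with that."

print(respond("Where do I check-in online?"))
print(respond("What do you think about politics?"))
```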

Testing and debugging. We expect to be able to automatically generate test cases to thoroughly test a conversation flow end to end by automatically identifying errors or areas that need improvement. We won’t be quick to move away from our existing testing and debugging process given the scale and complexity of many of our enterprise customers. For now, we see this as a way to expedite and enhance our process, not replace it.
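One simple way to picture automatic test-case generation is path enumeration: walk every route through a flow graph and emit one end-to-end test case per route. The flow below is a made-up example; real enterprise flows carry conditions and payloads, which is why generated cases would augment the existing process rather than replace it.

```python
# A hypothetical conversation flow as an adjacency list: step -> next steps.
flow = {
    "greet": ["identify_booking"],
    "identify_booking": ["change_flight", "cancel_flight"],
    "change_flight": ["confirm"],
    "cancel_flight": ["confirm"],
    "confirm": [],  # terminal step
}

def enumerate_paths(node, path=()):
    """Yield every complete path from `node` to a terminal step."""
    path = path + (node,)
    if not flow[node]:          # terminal step: one complete test case
        yield list(path)
        return
    for nxt in flow[node]:
        yield from enumerate_paths(nxt, path)

test_cases = list(enumerate_paths("greet"))
for case in test_cases:
    print(" -> ".join(case))
```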

If you squint, you can start to see the trendline: less time spent building bots and more time spent optimizing and tweaking performance to deliver effective customer experiences. If we can better anticipate and act on emergent customer needs while minimizing build time, we’ll unlock a level of performance and value that we have yet to really see on the market today.

It’s essential to have a platform like Conversations by NLX that can knit all of these capabilities together. There’s a compounding effect that takes place when these capabilities are housed together. Standalone services that don’t interact with your systems of record or business rules, or that lack guardrails to safeguard your brand, don’t mean much on their own. NLX brings all of this together to meet the demands of enterprise organizations.

If you’re interested in exploring what’s possible with our latest generative AI capabilities, reach out here.

Derrick Bradley

Derrick has spent the last 10 years orchestrating innovation and transformation efforts for Fortune 500 firms. He previously worked at Undercurrent, an organizational design firm in NYC, where he led multi-year transformations for firms like GE, PepsiCo, and American Express.

Since returning to Canada, he founded Opus, a consultancy that is bringing new ways of working to our most important institutions.

Derrick is an active angel investor, speaker, and advisor to startups.