NLX gives enterprises choice over large language models
NLX is excited to announce that customers can now access a collection of large language models (LLMs) from inside our platform, Conversations by NLX. While everyone has rushed to incorporate OpenAI's GPT-3 and GPT-4 LLMs into their products (including us), we believe it's important to offer a wide variety of options to serve our enterprise customers' needs.
New Models Available in Conversations by NLX
- Cohere. A company that builds enterprise-grade language models and offers them through secure, scalable APIs for use cases ranging from content generation to summarization and search – a strong alternative to OpenAI's offering.
- BLOOM. An autoregressive large language model (LLM) trained on vast amounts of text data to continue text from a prompt. It is able to output coherent text in 46 languages and 13 programming languages that is hardly distinguishable from text written by humans.
- AlexaTM 20B. A 20-billion-parameter sequence-to-sequence transformer model created by the Alexa Teacher Model (AlexaTM) team at Amazon. The model was trained on a mixture of Common Crawl (mC4) and Wikipedia data across 12 languages using denoising and Causal Language Modeling (CLM) tasks.
While all of these models have their own strengths, we are particularly excited about Cohere because of its fitness for the enterprise. Cohere can be deployed on public, private, or hybrid clouds to ensure data security standards are met or exceeded, which is something we're always looking for. With Cohere, an enterprise can quickly analyze and understand unstructured data, such as text and audio, in a more efficient and effective way. The platform is capable of extracting insights and identifying patterns across disconnected systems of record, which is a recurring pain point we see with all of our enterprise customers. Our integration with Cohere effectively solves a large portion of this pain point. Teams inside large enterprises do not need to modify their behavior, tool usage, or change management process to benefit from Cohere's LLM capabilities. It no longer matters if one CMS doesn't speak to another. We can train a model to capture information from disparate systems, use the correct data to inform internal decisions or customer communications, and wrap it all inside a conversational journey in any channel.
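To give a sense of how lightweight it is to work with Cohere's API, here is a minimal sketch that sends a prompt to a Cohere generation model from Python. The API key, model name, and transcript are placeholders; this illustrates the kind of call involved, not how Conversations by NLX wires up the integration behind the scenes.

```python
import cohere

# Placeholder API key; in practice this would come from a secrets manager.
co = cohere.Client("YOUR_COHERE_API_KEY")

# Ask a Cohere generation model to summarize a (hypothetical) support transcript.
response = co.generate(
    model="command",  # example model name; choose whichever Cohere model fits the use case
    prompt=(
        "Summarize the key issue and resolution from this customer support transcript:\n\n"
        "Agent: Hi, how can I help?\nCustomer: My booking reference isn't recognized...\n"
    ),
    max_tokens=200,
    temperature=0.3,
)

print(response.generations[0].text)
```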
As the open source community continues to roll out new large language models, we evaluate each publicly available model against an internal litmus test of its viability for the enterprise.
To host and run these large language models (with the exception of Cohere, which is exposed via API), we selected Amazon SageMaker, AWS's end-to-end machine learning platform. This choice was made in response to the growing demand from enterprise customers to leverage generative AI in a cost-effective, scalable, secure, and compliant way. SageMaker delivers on all four of these criteria, often exceeding the most pressing needs we've seen come our way.
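For readers curious about what self-hosting a model like BLOOM on SageMaker can look like, here is a minimal sketch using the SageMaker Python SDK and a Hugging Face deep learning container. The IAM role ARN, model ID, instance type, and prompt are illustrative assumptions, not a description of NLX's production setup.

```python
from sagemaker.huggingface import HuggingFaceModel

# Hypothetical execution role; substitute a SageMaker-enabled IAM role from your account.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

# Pull a small BLOOM variant from the Hugging Face Hub for text generation.
model = HuggingFaceModel(
    env={"HF_MODEL_ID": "bigscience/bloom-560m", "HF_TASK": "text-generation"},
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

# Deploy the model to a managed endpoint inside your own AWS account.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

# Invoke the endpoint like any other SageMaker predictor.
result = predictor.predict({"inputs": "The customer asked about their refund status, so we"})
print(result)

# Tear the endpoint down when it is no longer needed to stop incurring cost.
predictor.delete_endpoint()
```

Because the endpoint runs inside the customer's own AWS account, prompts and responses stay within that cloud boundary, which is a large part of why SageMaker satisfies the security and compliance criteria above.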
Providing access to multiple LLMs builds on our underlying belief that flexibility at scale is essential to delivering value to the enterprise. NLX's core architecture has purposefully been designed to be unopinionated, making it easy to mix and match best-in-class providers. We already allow customers to select their NLP provider, and now we're expanding that design principle to LLMs.
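To make the "unopinionated" idea concrete, the sketch below shows one common way this kind of design can be expressed in code: a small provider interface that any LLM backend can satisfy. The class and method names are hypothetical and purely illustrative; they are not NLX's internal API.

```python
from typing import Protocol


class TextGenerator(Protocol):
    """Anything that can turn a prompt into text can plug into the platform."""

    def generate(self, prompt: str, max_tokens: int = 256) -> str: ...


class CohereGenerator:
    """Adapter around a cohere.Client instance."""

    def __init__(self, client):
        self._client = client

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        response = self._client.generate(prompt=prompt, max_tokens=max_tokens)
        return response.generations[0].text


class SageMakerGenerator:
    """Adapter around a SageMaker predictor for a self-hosted model such as BLOOM."""

    def __init__(self, predictor):
        self._predictor = predictor

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        output = self._predictor.predict(
            {"inputs": prompt, "parameters": {"max_new_tokens": max_tokens}}
        )
        return output[0]["generated_text"]


def draft_reply(question: str, llm: TextGenerator) -> str:
    # The conversation logic never needs to know which provider sits behind the interface,
    # so swapping Cohere for a SageMaker-hosted model is a one-line change at setup time.
    return llm.generate(f"Write a concise reply to this customer question: {question}")
```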
The Benefits of Multiple LLMs
- Improved accuracy: broadly speaking, different LLMs can be trained on different datasets and tuned for different purposes, improving outcomes for an enterprise's specific domain or industry.
- Increased versatility: by leveraging multiple LLMs, enterprises can handle a wider range of use cases, such as assisting with administrative overhead for an airline's global crew operations or analyzing internal and external merchandising data for a global retailer making buying decisions.
- Flexibility: enterprises can select the best LLM for their needs (reliability, security, speed, domain knowledge), rather than relying on a single LLM that may not be optimal for all situations.
- Competitive advantage: access to multiple LLMs can provide enterprises with a competitive advantage by allowing them to develop more advanced AI solutions than their competitors.
Bringing all of these components together inside our platform is what makes our conversational design capabilities so powerful. You can build one set of conversations using Amazon Lex and GPT-3 focused on automating inbound customer service inquiries and improving CSAT scores, while another set could opt for Cohere and our NetSuite integration to train an internal bot that assists with inventory optimization. NLX provides the guardrails, training, and expertise to get the most out of any combination of these models.
With this added flexibility, we're keenly aware of the need for strategic guidance and input from experts in the field to build the best experiences and get the most from these models. That's why NLX has been busy building its own services teams and a suite of integration partners who are experts in enterprise change management, conversational design, and generative AI. We're incredibly excited to see what customers build on top of Conversations by NLX with this latest release.