Allow AI Agents to Connect to Any LLM via API (Bring Your Own LLM / BYOK)
in progress
Colvo Media
As GoHighLevel continues expanding its artificial intelligence capabilities (Conversation AI, Voice AI, and AI Agents), there is a clear need for greater flexibility in choosing the language models used behind the system.
Currently, users are limited to the LLM providers that GoHighLevel chooses to integrate directly. However, the AI ecosystem evolves extremely fast, with new models appearing constantly. Agencies and developers need the ability to choose the model that best fits their use case, cost structure, performance requirements, or language support.
It would be very valuable to enable a Bring Your Own LLM (BYO-LLM) approach, allowing AI Agents to connect to any external language model via API.
This could be configured easily within the AI Agent settings by allowing users to enter:
* API endpoint
* API key / secret key
* model name
* basic parameters such as temperature or max tokens
This should support two types of connections:
* Direct connections to LLM providers, for example OpenAI, Anthropic (Claude), Google Gemini, xAI (Grok), Mistral, DeepSeek, or self-hosted open-source models.
* Connections through aggregators such as OpenRouter, allowing access to multiple models through a single endpoint.
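As a rough sketch of how such settings could map onto an outgoing request: both connection types can share one code path if the endpoint speaks an OpenAI-compatible chat-completions API (which most providers and aggregators like OpenRouter expose). Everything below is illustrative, not an existing GoHighLevel API; the field names simply mirror the settings proposed above, and the keys shown are placeholders.

```python
# Hypothetical sketch: turn stored BYO-LLM settings into a request description
# for any OpenAI-compatible chat-completions endpoint. Field names (endpoint,
# api_key, model, temperature, max_tokens) mirror the proposed agent settings.

def build_llm_request(settings: dict, messages: list) -> dict:
    """Build the URL, headers, and JSON body for a chat-completions call."""
    return {
        "url": settings["endpoint"],
        "headers": {
            "Authorization": f"Bearer {settings['api_key']}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": settings["model"],
            "messages": messages,
            # Basic parameters with sensible fallbacks when not configured.
            "temperature": settings.get("temperature", 0.7),
            "max_tokens": settings.get("max_tokens", 1024),
        },
    }

# A direct provider and an aggregator differ only in endpoint and model name:
direct = {
    "endpoint": "https://api.openai.com/v1/chat/completions",
    "api_key": "sk-PLACEHOLDER",
    "model": "gpt-4o",
}
aggregated = {
    "endpoint": "https://openrouter.ai/api/v1/chat/completions",
    "api_key": "sk-or-PLACEHOLDER",
    "model": "anthropic/claude-3.5-sonnet",
}

req = build_llm_request(direct, [{"role": "user", "content": "Hi"}])
```

Because the request shape is identical in both cases, supporting "any LLM via API" mostly reduces to letting the user supply these four or five fields per agent.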
Key benefits include:
* Flexibility to choose the best model depending on the use case.
* Cost optimization by routing usage to more efficient providers.
* Future-proof architecture that does not depend on specific integrations.
* Greater ability to build advanced AI workflows by connecting external AI stacks or agents.
In practice, this would turn GoHighLevel into an AI-agnostic platform where the CRM manages workflows and automation while the user decides which language model to use.
Abhishek Kumar
Colvo Media More LLMs are coming, like Claude and Gemini models.
But it won't be bring-your-own-keys, as we already charge per token at market rates.
Colvo Media
Abhishek Kumar Thanks for the update. I understand that more LLMs will be added, but considering how fast new models are being released, how does HighLevel plan to avoid falling behind in terms of model availability?
Will there be any mechanism or strategy to keep the platform updated with new models quickly, or some way for agencies and developers to access newer models as they appear?
Abhishek Kumar
Colvo Media We are working on a framework where we can add new models in a plug-and-play fashion.
Of course, when a new model is released we will first run a bunch of evaluations.
Once satisfied, we will release the new model within 2-3 months.
Abhishek Kumar marked this post as in progress