Open Source Embeddings
Open source embedding components provide access to locally hosted and community-driven embedding models.
Hugging Face Embeddings
This component loads embedding models from Hugging Face.
Use this component to generate embeddings using locally downloaded Hugging Face models. Ensure you have sufficient computational resources to run the models.
Inputs

| Name | Display Name | Info |
|------|--------------|------|
| Cache Folder | Cache Folder | Folder path to cache Hugging Face models |
| Encode Kwargs | Encoding Arguments | Additional arguments for the encoding process |
| Model Kwargs | Model Arguments | Additional arguments for the model |
| Model Name | Model Name | Name of the Hugging Face model to use |
| Multi Process | Multi-Process | Whether to use multiple processes |
Outputs

| Name | Display Name | Info |
|------|--------------|------|
| embeddings | Embeddings | The generated embeddings |
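As a rough sketch of what these inputs map to, the following uses LangChain's `HuggingFaceEmbeddings` wrapper (from the `langchain-community` package, with `sentence-transformers` installed); the model name, cache path, and kwargs are illustrative placeholders, not required values:

```python
# Hedged sketch: embedding text with a locally downloaded Hugging Face model
# via LangChain's community wrapper. All values below are placeholders.
from langchain_community.embeddings import HuggingFaceEmbeddings

embedder = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",  # Model Name
    cache_folder="./hf_model_cache",                      # Cache Folder
    model_kwargs={"device": "cpu"},                       # Model Arguments
    encode_kwargs={"normalize_embeddings": True},         # Encoding Arguments
    multi_process=False,                                  # Multi-Process
)

vector = embedder.embed_query("What is an embedding?")
print(len(vector))  # all-MiniLM-L6-v2 produces 384-dimensional vectors
```

The first call downloads the model into the cache folder; subsequent calls run fully offline.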
Hugging Face Embeddings Inference
This component generates embeddings using Hugging Face Inference API models. A Hugging Face API token is required to authenticate with hosted models; locally hosted inference endpoints do not require an API key.
Use this component to create embeddings with Hugging Face's hosted models, or to connect to your own locally hosted models.
Inputs

| Name | Display Name | Info |
|------|--------------|------|
| API Key | API Key | The API key for accessing the Hugging Face Inference API. |
| API URL | API URL | The URL of the Hugging Face Inference API. |
| Model Name | Model Name | The name of the model to use for embeddings. |
| Cache Folder | Cache Folder | The folder path to cache Hugging Face models. |
| Encode Kwargs | Encoding Arguments | Additional arguments for the encoding process. |
| Model Kwargs | Model Arguments | Additional arguments for the model. |
| Multi Process | Multi-Process | Whether to use multiple processes. |
Outputs

| Name | Display Name | Info |
|------|--------------|------|
| embeddings | Embeddings | The generated embeddings. |
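For a rough illustration of the underlying call, the following uses the `huggingface_hub` client directly; the token and model name are placeholders, and a locally hosted endpoint can be targeted by passing its URL in place of a model id:

```python
# Hedged sketch: requesting embeddings from the Hugging Face Inference API
# with the `huggingface_hub` client. Token and model name are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="sentence-transformers/all-MiniLM-L6-v2",  # Model Name
    token="hf_xxx",                                  # API Key (hosted models only)
)
# For a locally hosted inference endpoint, pass its URL instead of a model id,
# e.g. InferenceClient(model="http://localhost:8080"); no token is needed.

vector = client.feature_extraction("What is an embedding?")
print(vector.shape)  # e.g. (384,) for all-MiniLM-L6-v2
```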
Ollama Embeddings
This component generates embeddings using Ollama models.
For a list of Ollama embeddings models, see the Ollama documentation.
To use this component in a flow, connect BroxiAI to your locally running Ollama server and select an embeddings model:

1. In the Ollama component, in the Ollama Base URL field, enter the address of your locally running Ollama server. This value is set as the `OLLAMA_HOST` environment variable in Ollama. The default base URL is `http://127.0.0.1:11434`. To refresh the server's list of models, click the refresh button.
2. In the Ollama Model field, select an embeddings model. This example uses `all-minilm:latest`.
3. Connect the Ollama embeddings component to a flow. For example, this flow connects a local Ollama server running an `all-minilm:latest` embeddings model to a Chroma DB vector store to generate embeddings for split text.

For more information, see the Ollama documentation.
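A minimal sketch of an equivalent request against the Ollama REST API, assuming the server is running at the default base URL and the model has already been pulled with `ollama pull all-minilm:latest`:

```python
# Hedged sketch: requesting an embedding directly from a locally running
# Ollama server over its REST API. Base URL and model are placeholders.
import requests

base_url = "http://127.0.0.1:11434"  # Ollama Base URL (OLLAMA_HOST)

response = requests.post(
    f"{base_url}/api/embeddings",
    json={
        "model": "all-minilm:latest",       # Ollama Model
        "prompt": "What is an embedding?",  # text to embed
    },
    timeout=60,
)
response.raise_for_status()

vector = response.json()["embedding"]
print(len(vector))  # all-minilm produces 384-dimensional vectors
```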
Inputs

| Name | Type | Description |
|------|------|-------------|
| Ollama Model | String | Name of the Ollama model to use (default: `llama2`) |
| Ollama Base URL | String | Base URL of the Ollama API (default: `http://localhost:11434`) |
| Model Temperature | Float | Temperature parameter for the model. Adjusts the randomness in the generated embeddings |
Outputs

| Name | Display Name | Info |
|------|--------------|------|
| embeddings | Embeddings | An instance for generating embeddings using Ollama |
LM Studio Embeddings
This component generates embeddings using LM Studio models.
Inputs

| Name | Display Name | Info |
|------|--------------|------|
| model | Model | The LM Studio model to use for generating embeddings |
| base_url | LM Studio Base URL | The base URL for the LM Studio API |
| api_key | LM Studio API Key | API key for authentication with LM Studio |
| temperature | Model Temperature | Temperature setting for the model |
Outputs

| Name | Display Name | Info |
|------|--------------|------|
| embeddings | Embeddings | The generated embeddings |
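LM Studio's local server exposes an OpenAI-compatible API, so a quick way to see what these inputs map to is the standard `openai` client. A sketch assuming LM Studio's default server address; the API key and model name below are placeholders:

```python
# Hedged sketch: LM Studio serves an OpenAI-compatible API, so the standard
# `openai` client can request embeddings from it. All values are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # base_url: LM Studio Base URL
    api_key="lm-studio",                  # api_key: local servers accept any value
)

response = client.embeddings.create(
    model="nomic-embed-text-v1.5",  # model: an embedding model loaded in LM Studio
    input="What is an embedding?",
)

vector = response.data[0].embedding
print(len(vector))
```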
Usage Notes

- **Cost-effective:** No API costs after the initial setup.
- **Privacy:** Models run locally, keeping your data private.
- **Customization:** Full control over model parameters and configurations.
- **Offline capability:** Works without an internet connection once models are downloaded.
- **Community models:** Access to thousands of open source embedding models.