The following plugins are available for LLM. Here’s how to install them.
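Each of these can be installed with the llm install command. A minimal sketch, using llm-gpt4all as a stand-in for any plugin name from the directory below:

    # Install a plugin into the same environment as LLM itself
    llm install llm-gpt4all
    # Confirm that the plugin registered new models
    llm models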
Local models
These plugins all help you run LLMs directly on your own computer:
- llm-llama-cpp uses llama.cpp to run models published in the GGUF format.
- llm-mlc can run local models released by the MLC project, including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.
- llm-gpt4all adds support for various models released by the GPT4All project that are optimized to run locally on your own machine. These models include versions of Vicuna, Orca, Falcon and MPT; the GPT4All project publishes a full list of models.
- llm-mpt30b adds support for the MPT-30B local model.
- llm-ollama adds support for local models run using Ollama.
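As a sketch of the typical local-model workflow (the model ID here is one example from the GPT4All family; run llm models to see what a plugin actually registered on your machine):

    # Install a local-model plugin, then prompt one of its models
    llm install llm-gpt4all
    llm models                                   # list newly available model IDs
    llm -m orca-mini-3b-gguf2-q4_0 "Say hello"   # example model ID; yours may differ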
Remote APIs
These plugins can be used to interact with remotely hosted models via their API:
- llm-mistral adds support for Mistral AI’s language and embedding models.
- llm-gemini adds support for Google’s Gemini models.
- llm-claude by Tom Viner adds support for Claude 2.1 and Claude Instant 2.1 by Anthropic.
- llm-claude-3 supports Anthropic’s Claude 3 family of models.
- llm-command-r supports Cohere’s Command R and Command R Plus API models.
- llm-anyscale-endpoints supports models hosted on the Anyscale Endpoints platform, including Llama 2 70B.
- llm-replicate adds support for remote models hosted on Replicate, including Llama 2 from Meta AI.
- llm-palm adds support for Google’s PaLM 2 model.
- llm-openrouter provides access to models hosted on OpenRouter.
- llm-cohere by Alistair Shepherd provides cohere-generate and cohere-summarize API models, powered by Cohere.
- llm-bedrock-anthropic by Sean Blakey adds support for Claude and Claude Instant by Anthropic via Amazon Bedrock.
- llm-bedrock-meta by Fabian Labat adds support for Llama 2 by Meta via Amazon Bedrock.
- llm-together adds support for Together AI’s extensive family of hosted, openly licensed models.
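Remote API plugins generally need an API key for the hosting provider. A hedged sketch using llm-mistral (the key name and model ID are set by that plugin, so check its README):

    # Store the provider's API key, then call one of the plugin's models
    llm install llm-mistral
    llm keys set mistral      # paste the API key when prompted
    llm -m mistral-small "A haiku about API keys"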
If an API model host provides an OpenAI-compatible API, you can also configure LLM to talk to it without needing an extra plugin.
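As a sketch, this works by listing the extra endpoints in an extra-openai-models.yaml file in LLM's configuration directory; the model names and URL below are hypothetical placeholders:

    # extra-openai-models.yaml
    - model_id: my-local-model              # hypothetical alias used with: llm -m my-local-model
      model_name: mistral-7b-instruct       # hypothetical name the server expects
      api_base: "http://localhost:8080/v1"  # hypothetical OpenAI-compatible endpoint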
Embedding models
Embedding models are models that can be used to generate and store embedding vectors for text.
- llm-sentence-transformers adds support for embeddings using the sentence-transformers library, which provides access to a wide range of embedding models.
- llm-clip provides the CLIP model, which can be used to embed images and text in the same vector space, enabling text search against images. See Build an image search engine with llm-clip for more on this plugin.
- llm-embed-jina provides Jina AI’s 8K context length text embedding models.
- llm-embed-onnx provides seven embedding models that can be executed using the ONNX model framework.
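A usage sketch with llm-sentence-transformers (the model ID shown is the one that plugin registers; llm embed-models lists exactly what is available):

    # Install an embedding plugin, then embed a string of text
    llm install llm-sentence-transformers
    llm embed -m sentence-transformers/all-MiniLM-L6-v2 -c "Hello world"
    # Output is a JSON array of floating point numbers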
Extra commands
- llm-cmd accepts a prompt for a shell command, runs that prompt and populates the result in your shell so you can review it, edit it and then hit <enter> to execute or ctrl+c to cancel.
- llm-python adds a llm python command for running a Python interpreter in the same virtual environment as LLM. This is useful for debugging, and also provides a convenient way to interact with the LLM Python API if you installed LLM using Homebrew or pipx.
- llm-cluster adds a llm cluster command for calculating clusters for a collection of embeddings. Calculated clusters can then be passed to a Large Language Model to generate a summary description.
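A hedged sketch of these three commands in action (my-collection is a hypothetical embedding collection created earlier, for example with llm embed-multi):

    # Turn a prompt into a shell command, review it, then hit enter to run it
    llm cmd undo the last git commit
    # Open a Python interpreter in the same environment as LLM
    llm python
    # Group a stored embedding collection into 4 clusters and summarize each one
    llm cluster my-collection 4 --summary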