Releases: simonw/llm

0.10a1 (pre-release)

12 Sep 06:00

  • Support for embedding binary data. #254
  • llm chat now works for models with API keys. #247
  • llm chat -o for passing options to a model. #244
  • llm chat --no-stream option. #248
  • LLM_LOAD_PLUGINS environment variable. #256
  • llm plugins --all option for including builtin plugins. #259
  • llm embed-db has been renamed to llm collections. #229
  • Fixed bug where llm embed -c option was treated as a filepath, not a string. Thanks, mhalle. #263
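
For example, to start a chat with options set, something like this should work (the model ID and option value here are illustrative):

llm chat -m gpt-4 -o temperature 0.5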

0.10a0 (pre-release)

05 Sep 06:43

  • New llm chat command for starting an interactive terminal chat with a model. #231
  • llm embed-multi --files now has an --encoding option and defaults to falling back to latin-1 if a file cannot be processed as utf-8. #225
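
For example, the new --encoding option might be used like this (the collection name, directory, glob pattern and encoding are all illustrative):

llm embed-multi docs --files docs '**/*.md' --encoding utf-16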

0.9

04 Sep 02:40

The big new feature in this release is support for embeddings.

Embedding models take a piece of text - a word, sentence, paragraph or even a whole article - and convert it into an array of floating point numbers. #185

This embedding vector can be thought of as representing a position in many-dimensional space, where the distance between two vectors represents how semantically similar they are to each other within the context of a language model.

Embeddings can be used to find related documents, and also to implement semantic search - where a user can search for a phrase and get back results that are semantically similar to that phrase even if they do not share any exact keywords.
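
To make the idea of distance-as-similarity concrete, here is a minimal self-contained sketch using cosine similarity over toy three-dimensional vectors - real embedding models return hundreds or thousands of dimensions, and this does not use the LLM API at all:

import math

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way (very similar content);
    # values near 0 mean unrelated content
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([0.9, 0.1, 0.0], [0.8, 0.2, 0.1]))  # high: similar
print(cosine_similarity([0.9, 0.1, 0.0], [0.0, 0.1, 0.9]))  # low: dissimilar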

LLM now provides both CLI and Python APIs for working with embeddings. Embedding models are defined by plugins, so you can install additional models using the plugins mechanism.

The first two embedding models supported by LLM are:

  • OpenAI's ada-002 embedding model, available via the OpenAI API.
  • The sentence-transformers family of models, available via the new llm-sentence-transformers plugin.

See Embedding with the CLI for detailed instructions on working with embeddings using LLM.

The new commands for working with embeddings are:

  • llm embed - calculate embeddings for content and return them to the console or store them in a SQLite database.
  • llm embed-multi - run bulk embeddings for multiple strings, using input from a CSV, TSV or JSON file, data from a SQLite database or data found by scanning the filesystem. #215
  • llm similar - run similarity searches against your stored embeddings - starting with a search phrase or finding content related to a previously stored vector. #190
  • llm embed-models - list available embedding models.
  • llm embed-db - commands for inspecting and working with the default embeddings SQLite database.
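
For example, a hypothetical session that stores one embedding in a collection and then searches it (the collection name, item ID and model are illustrative):

llm embed documents readme -m ada-002 -c 'Documentation for the project'
llm similar documents -c 'project docs'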

There's also a new llm.Collection class for creating and searching collections of embeddings from Python code, and a llm.get_embedding_model() interface for embedding strings directly. #191
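
A minimal sketch of that Python API, assuming an installed embedding model with the ID ada-002 (the collection name is illustrative, and the collection here is kept in memory rather than in a SQLite database):

import llm

# Embed a single string directly
model = llm.get_embedding_model("ada-002")
vector = model.embed("my happy hound")  # a list of floating point numbers

# Create a collection, store an embedding, then search it
collection = llm.Collection("entries", model_id="ada-002")
collection.embed("1", "my happy hound", store=True)
for entry in collection.similar("hound"):
    print(entry.id, entry.score)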

0.9a1 (pre-release)

04 Sep 01:28

0.9a0 (pre-release)

02 Sep 17:54

  • Alpha release of embeddings support. #185

0.8.1

01 Sep 03:35

  • Fixed bug where the first prompt would show an error if the io.datasette.llm directory had not yet been created. #193
  • Updated documentation to recommend a different llm-gpt4all model since the one we were using is no longer available. #195

0.8

21 Aug 06:55

  • The output format for llm logs has changed. Previously it was JSON - it's now a much more readable Markdown format suitable for pasting into other documents. #160
    • The new llm logs --json option can be used to get the old JSON format.
    • Pass llm logs --conversation ID or --cid ID to see the full logs for a specific conversation.
  • You can now combine piped input and a prompt in a single command: cat script.py | llm 'explain this code'. This works even for models that do not support system prompts. #153
  • Additional OpenAI-compatible models can now be configured with custom HTTP headers. This enables platforms such as openrouter.ai to be used with LLM, which can provide Claude access even without an Anthropic API key.
  • Keys set in keys.json are now used in preference to environment variables. #158
  • The documentation now includes a plugin directory listing all available plugins for LLM. #173
  • New related tools section in the documentation describing ttok, strip-tags and symbex. #111
  • The llm models, llm aliases and llm templates commands now default to running the same command as llm models list, llm aliases list and llm templates list respectively. #167
  • New llm keys (aka llm keys list) command for listing the names of all configured keys. #174
  • Two new Python API functions, llm.set_alias(alias, model_id) and llm.remove_alias(alias) can be used to configure aliases from within Python code. #154
  • LLM is now compatible with both Pydantic 1 and Pydantic 2. This means you can install llm as a Python dependency in a project that depends on Pydantic 1 without running into dependency conflicts. Thanks, Chris Mungall. #147
  • llm.get_model(model_id) is now documented as raising llm.UnknownModelError if the requested model does not exist. #155
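
Those two alias functions and the documented llm.UnknownModelError behavior can be combined into a minimal sketch (the alias and model IDs are illustrative):

import llm

# Equivalent to "llm aliases set turbo gpt-3.5-turbo-16k" on the command line
llm.set_alias("turbo", "gpt-3.5-turbo-16k")
model = llm.get_model("turbo")

# llm.get_model() raises llm.UnknownModelError for models that do not exist
try:
    llm.get_model("no-such-model")
except llm.UnknownModelError:
    print("model not found")

llm.remove_alias("turbo")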

0.7.1

19 Aug 21:08

  • Fixed a bug where some users would see an AlterError: No such column: log.id error when attempting to use this tool, after upgrading to the latest sqlite-utils 3.35 release. #162

0.7

12 Aug 16:41

The new Model aliases commands can be used to configure additional aliases for models, for example:

llm aliases set turbo gpt-3.5-turbo-16k

Now you can run the 16,000 token gpt-3.5-turbo-16k model like this:

llm -m turbo 'An epic Greek-style saga about a cheesecake that builds a SQL database from scratch'

Use llm aliases list to see a list of aliases and llm aliases remove turbo to remove one again. #151

Notable new plugins

  • llm-mlc can run local models released by the MLC project, including models that can take advantage of the GPU on Apple Silicon.
  • llm-llama-cpp uses llama.cpp to run models published in the GGML format.

Also in this release

  • OpenAI models now have min and max validation on their floating point options. Thanks, Pavel Král. #115
  • Fix for bug where llm templates list raised an error if a template had an empty prompt. Thanks, Sherwin Daganato. #132
  • Fixed bug in llm install --editable option which prevented installation of .[test]. #136
  • llm install --no-cache-dir and --force-reinstall options. #146

0.6.1

24 Jul 15:55

  • LLM can now be installed directly from Homebrew core: brew install llm. #124
  • Python API documentation now covers System prompts.
  • Fixed incorrect example in the Prompt templates documentation. Thanks, Jorge Cabello. #125