mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more. Features: Generate Text, MCP, Audio, Video, Images, Voice Cloning, Distributed, P2P and decentralized inference
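The "drop-in replacement" claim means LocalAI exposes the same REST endpoints as the OpenAI API, so existing OpenAI clients can simply be pointed at a local instance. A minimal sketch of building a chat-completion request (the `localhost:8080` address and the model name are assumptions for illustration, not values confirmed by this listing):

```python
import json
import urllib.request

# Request body in the OpenAI chat-completions schema; LocalAI accepts
# the same shape on its /v1/chat/completions endpoint.
payload = {
    "model": "gpt-4",  # maps to whichever model is installed locally
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # assumed local address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Actually sending the request requires a running LocalAI instance:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the schema matches, official OpenAI SDKs can also be reused by overriding only their base URL.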

TAGS

20 tags
v4.0.0

fix(ui): do not let form button trigger

v3.12.1

chore: :arrow_up: Update ggml-org/llama.cpp to `ba3b9c8844aca35ecb40d31886686326f22d2214` (#8613)

v3.12.0

fix(ui): pass needed values to unbreak the model editor

v3.11.0

chore: :arrow_up: Update ggml-org/llama.cpp to `8872ad2125336d209a9911a82101f80095a9831d` (#8448)

v3.10.1

feat(qwen-tts): add Qwen-TTS backend (#8163); also updates Intel dependencies and drops flash-attn for cuda13
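Qwen-TTS is a text-to-speech backend. Assuming it is reachable through LocalAI's OpenAI-compatible speech-synthesis endpoint (an assumption about routing; the model and voice names below are placeholders, not verified Qwen-TTS identifiers), a request body would look like:

```python
import json

# OpenAI-style /v1/audio/speech request body; "qwen-tts" and "default"
# are hypothetical placeholder names used only for illustration.
payload = {
    "model": "qwen-tts",
    "input": "Hello from a locally hosted model.",
    "voice": "default",
}
body = json.dumps(payload).encode("utf-8")
# POSTing `body` to a running instance would return audio bytes.
```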

v3.10.0

chore: drop neutts for l4t (#8101). Builds currently exhaust CI, and better backends exist at this point; it will probably be deprecated in the future.

v3.9.0

chore(model gallery): :robot: add 1 new model via gallery agent (#7712)

v3.8.0

fix: Initialize sudo reference before its first actual use (#7360)

v3.7.0

chore: :arrow_up: Update ggml-org/llama.cpp to `31c511a968348281e11d590446bb815048a1e912` (#6970)

v3.6.0

chore(model gallery): add ibm-granite_granite-4.0-micro (#6376)

v3.5.4

docs: :arrow_up: update docs version mudler/LocalAI (#6315)

v3.5.3

fix(diffusers): fix float detection (#6313). There was an oversight; this fixes the float/int detection. Fixes: https://github.com/mudler/LocalAI/issues/6312
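The float/int distinction referenced here concerns coercing numeric option values while preserving their type. A hypothetical sketch of the idea (not the project's actual code):

```python
def coerce_number(value: str):
    """Parse a numeric string, keeping ints as ints and floats as floats.

    Trying int() first avoids the common oversight of parsing everything
    as float, which would turn "7" into 7.0 and break integer-only options.
    """
    try:
        return int(value)    # "7"   -> 7   (int)
    except ValueError:
        return float(value)  # "7.5" -> 7.5 (float)
```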

v3.5.2

chore: :arrow_up: Update ggml-org/llama.cpp to `0320ac5264279d74f8ee91bafa6c90e9ab9bbb91` (#6306)

v3.5.1

chore(model gallery): add websailor-7b (#6300)

v3.5.0

chore(ci): fixup release pipeline

v3.4.0

chore: :arrow_up: Update ggml-org/llama.cpp to `be48528b068111304e4a0bb82c028558b5705f05` (#6012)

v3.3.2

chore(build): Rename sycl to intel (#5964)

v3.3.1

chore: :arrow_up: Update ggml-org/llama.cpp to `daf2dd788066b8b239cb7f68210e090c2124c199` (#5951)

v3.3.0

fix(backend gallery): intel images for python-based backends, re-add exllama2 (#5928)

v3.2.3

fix(cuda): be consistent with image tag naming (#5916)