Highlights

  • Today, we’re launching Unsloth Studio (Beta): an open-source, no-code web UI for training, running and exporting open models in one unified local interface.
  • Search for and run GGUF and safetensors models with self-healing tool calling, web search, automatic inference-parameter tuning, code execution, and APIs. Upload images, documents, audio, and code files.
  • Upload PDF, CSV, or JSON documents, or YAML configs, and start training instantly on NVIDIA GPUs. Unsloth’s kernels optimize LoRA, FP8, full fine-tuning (FFT), and pretraining (PT) across 500+ text, vision, TTS/audio, and embedding models.
  • Does Unsloth collect or store data? We do not collect usage telemetry. We only collect the minimal hardware information required for compatibility, such as GPU type and device (e.g. Mac). Unsloth Studio runs 100% offline and locally.
  • Does Studio only support LLMs? No. Studio supports a range of transformers-compatible model families, including text, multimodal, text-to-speech, audio, embedding, and BERT-style models.
  • Do you need to train models to use the UI? No. You can download and run any GGUF or other model without fine-tuning anything. We’re working hard to make open-source AI as accessible as possible. Coming next for Unsloth and Unsloth Studio, we’re releasing official support for multi-GPU, Apple Silicon/MLX, AMD, and Intel. Reminder: this is the beta version of Unsloth Studio, so expect many announcements and improvements in the coming weeks. We’re also working closely with NVIDIA on multi-GPU support to deliver the best and simplest experience possible.