
Highlights

  • Astonishing first-token output speed: FastVLM-0.5B is 85x faster than LLaVA-OneVision, and FastVLM-7B (with Qwen2) is 7.9x faster than Cambrian-1-8B at similar accuracy.
  • Small model size, easier deployment: FastVLM-0.5B is 3.4x smaller than LLaVA-OneVision, making it ideal for on-device use on iPhone, iPad, and Mac.
  • Well suited to the iOS/Mac ecosystem, enabling edge AI applications.
  • For convenient use on Apple Silicon devices, we provide models in pre-converted formats: