  • With version 1.25, we're announcing an integration with OctoAI that makes it even easier for users to access many open-source embedding and language models, such as Llama3-70b, Mixtral-8x22b, and more.
  • We are releasing two modules, text2vec-octoai and generative-octoai, that connect Weaviate to the OctoAI service. OctoAI provides hosted inference for embedding models and large language models.
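To make the wiring concrete, here is a minimal sketch of a Weaviate class definition that uses both new modules. The module names (text2vec-octoai, generative-octoai) come from the release notes; the class name, property, and model identifiers are illustrative assumptions, not documented defaults.

```python
def octoai_class_definition():
    """Sketch of a class definition using the two OctoAI modules."""
    return {
        "class": "Article",                # hypothetical collection name
        "vectorizer": "text2vec-octoai",   # OctoAI embedding module
        "moduleConfig": {
            "text2vec-octoai": {
                # assumed parameter: which hosted embedding model to use
                "model": "thenlper/gte-large",
            },
            "generative-octoai": {
                # assumed parameter: hosted LLM used for generative queries
                "model": "meta-llama-3-70b-instruct",
            },
        },
        "properties": [
            {"name": "body", "dataType": ["text"]},
        ],
    }
```

In a real deployment you would send a definition like this to Weaviate's schema endpoint (or use a client library); consult the module documentation for the exact parameter names your Weaviate version expects.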
  • The multi2vec-palm integration arrived as an update to v1.24 and lets you use Google's hosted embedding models to embed multimodal data.
  • Before this module was released, users who wanted to embed multimodal data had to self-host an embedding model on their own compute; with multi2vec-palm, building multimodal applications is easier than ever.
  • Using Google's multimodal embedding model, you can now embed text, images, and videos into the same vector space and perform cross-modal retrieval!
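A multimodal class definition might look like the sketch below: each property is mapped to a modality so all three land in one vector space. The field and parameter names follow the common pattern for Weaviate's multi2vec modules and should be treated as assumptions; the class name and GCP identifiers are placeholders.

```python
def multimodal_class_definition():
    """Sketch of a multi2vec-palm class covering text, image, and video."""
    return {
        "class": "MediaClip",              # hypothetical collection name
        "vectorizer": "multi2vec-palm",
        "moduleConfig": {
            "multi2vec-palm": {
                # assumed keys mapping properties to modalities
                "textFields": ["caption"],
                "imageFields": ["thumbnail"],
                "videoFields": ["clip"],
                "location": "us-central1",     # assumed GCP region key
                "projectId": "my-gcp-project", # placeholder project id
            },
        },
        "properties": [
            {"name": "caption", "dataType": ["text"]},
            {"name": "thumbnail", "dataType": ["blob"]},
            {"name": "clip", "dataType": ["blob"]},
        ],
    }
```

Because text, image, and video properties share one embedding space, a text query can retrieve matching images or video clips (and vice versa), which is what makes cross-modal retrieval work.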
  • Standalone vector searches use the nearText and nearVector similarity operators to fine-tune search results. Since hybrid search combines the strengths of vector search and keyword search, many of you asked for this feature in hybrid search too. It's here! The 1.25 release adds the similarity operators to the vector component of hybrid search.
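The idea can be sketched as a helper that assembles the hybrid search arguments, attaching a nearText similarity threshold to the vector component when one is given. The payload shape below is an illustrative assumption, not the exact Weaviate GraphQL/REST syntax; check the hybrid search docs for the precise form.

```python
def hybrid_with_near_text(query, alpha=0.5, distance=None):
    """Build hybrid-search arguments; `distance` constrains the vector side.

    alpha balances keyword vs. vector scoring (0 = pure keyword,
    1 = pure vector). When `distance` is set, the vector component is
    additionally filtered by a nearText similarity threshold -- the new
    capability described for the 1.25 release.
    """
    hybrid = {"query": query, "alpha": alpha}
    if distance is not None:
        # assumed key names for the similarity operator inside hybrid
        hybrid["searches"] = {
            "nearText": {"concepts": [query], "distance": distance}
        }
    return hybrid
```

With a threshold, results whose vectors fall outside the given distance are excluded from the vector side of the fusion, so the hybrid result set stays semantically close to the query instead of being padded out by weak vector matches.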