Ever since the data science community discovered that vector search significantly improves LLM answers, various vendors and enthusiasts have been arguing over the proper solutions to store embeddings.
Some argue that storing them in a specialized engine (a vector database) is the better approach. Others say that plugins for existing databases are enough.
This article presents our vision and arguments on the topic. We will: