Wednesday, April 12, 2023

Show HN: GPTCache – Redis for LLMs https://ift.tt/DYZRFJ8

Hey folks,

As much as we love GPT-4, it's expensive and can be slow at times. That's why we built GPTCache, a semantic cache for autoregressive LLMs, on top of the vector database Milvus and SQLite.

GPTCache provides several benefits:

1) Reduced expenses, by minimizing the number of requests and tokens sent to the LLM service
2) Better performance, by fetching cached query results directly
3) Improved scalability and availability, by avoiding rate limits
4) A flexible development environment that lets developers verify their application's features without connecting to LLM APIs or the network

Come check it out!

https://ift.tt/X6EPYV0
https://ift.tt/OISxfh9

April 12, 2023 at 10:44PM
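To illustrate the idea behind a semantic cache, here is a minimal, self-contained sketch in plain Python. It is not GPTCache's actual API: the `SemanticCache` class, the `embed_fn` parameter, and the similarity threshold are all hypothetical names chosen for this example. The core technique is the same, though: embed each query, and on a lookup return the cached answer whose embedding is most similar to the new query's, provided the cosine similarity clears a threshold.

```python
import math


class SemanticCache:
    """Toy semantic cache: stores (embedding, answer) pairs and returns a
    cached answer when a new query's embedding is close enough.
    (Illustrative sketch only; GPTCache itself uses Milvus + SQLite.)"""

    def __init__(self, embed_fn, threshold=0.9):
        self.embed_fn = embed_fn    # maps text -> list[float] (hypothetical)
        self.threshold = threshold  # cosine-similarity cutoff for a cache hit
        self.entries = []           # list of (embedding, answer)

    @staticmethod
    def _cosine(a, b):
        # Standard cosine similarity between two equal-length vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def get(self, query):
        # Return the best-matching cached answer, or None on a cache miss.
        emb = self.embed_fn(query)
        best_answer, best_sim = None, 0.0
        for cached_emb, answer in self.entries:
            sim = self._cosine(emb, cached_emb)
            if sim > best_sim:
                best_answer, best_sim = answer, sim
        return best_answer if best_sim >= self.threshold else None

    def put(self, query, answer):
        # Store the query's embedding alongside the LLM's answer.
        self.entries.append((self.embed_fn(query), answer))
```

In a real deployment the embedding function would be a model, and the nearest-neighbor scan would be delegated to a vector database such as Milvus; here a toy embedding is enough to show a semantically similar query ("hi" vs. "hello") hitting the cache while an unrelated one misses.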
