I notice that if I ingest a 60KB PDF and query it, the answers are very accurate.

However, the more data I ingest (similar topics, just more data), the answers become more and more useless, to the point where questions it could answer on the small dataset suddenly get "I do not have an answer to the question" on the larger one.

Why might that be? This is quite a showstopper, since the whole point of ingesting broader "knowledge" is to increase the data.

Replies: 2 comments · 2 replies

If it's related to the number of docs and chunks, then this is something I solved in https://github.com/h2oai/h2ogpt (see h2oai/h2ogpt@58a17a8#diff-fb9fd206ee33cd1324b9732364e0ecdf19e8d693c0c11f6181fc1e8f1f96329eR550-R553), i.e. I had to deal with the Chroma bug langchain-ai/langchain#1946. It's a simple workaround that can be applied to PrivateGPT or other such repos. FAISS has no such issue, but it is not persistent. I didn't try Weaviate.
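
For anyone who wants the shape of that workaround without digging through the commit, here is a minimal sketch. It assumes the failure mode is Chroma misbehaving when the requested `k` doesn't fit the collection size; `safe_similarity_search`, `top_k_docs`, and `overfetch` are illustrative names, not PrivateGPT or h2ogpt API:

```python
# Minimal sketch of this style of workaround (not the exact h2ogpt code).
# Assumption: older chromadb versions raise NotEnoughElementsException
# when k exceeds the number of indexed chunks, and a small k on a large,
# noisy collection can surface weak chunks.
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings

db = Chroma(persist_directory="db",
            embedding_function=HuggingFaceEmbeddings())

def safe_similarity_search(db, query, top_k_docs=4, overfetch=100):
    # Never request more results than the index actually holds.
    n_chunks = db._collection.count()
    if n_chunks == 0:
        return []
    k = min(max(top_k_docs, overfetch), n_chunks)
    docs_and_scores = db.similarity_search_with_score(query, k=k)
    # Chroma's default distance is "lower is closer", so keep the
    # smallest scores and cut back down to top_k_docs.
    docs_and_scores.sort(key=lambda pair: pair[1])
    return [doc for doc, _ in docs_and_scores[:top_k_docs]]

docs = safe_similarity_search(db, "What is the notice period?")
```

Whether or not this exact bug is the cause on a larger dataset, capping and over-fetching `k` like this is cheap to try before reaching for a different vector store.
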
LangChain dependency updated in 1590c58. Thanks for your advice!