I am currently working with LlamaIndex TS to summarize large HTML files or strings.

Issues Encountered
1. Incomplete Summarization: The summarization output is often incomplete; at times I get no content at all.
2. Handling Large HTML Files: My HTML files are quite large, exceeding the maximum token limit of 8192. Despite using SimpleNodeParser with chunking, it appears to process only the first 800 tokens of the document.
Questions
What are the best practices for summarizing large HTML files or strings using LlamaIndex TS?
How can I ensure that the summarization captures the entire content of the document, given the token limitations?
Are there any specific settings or configurations in SimpleNodeParser or other components that I should adjust to improve the summarization results?
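On the token-limit question, one common pattern is map-reduce summarization: split the document into chunks that fit the model's context window, summarize each chunk, then summarize the combined partial summaries. The sketch below is dependency-free and illustrative only; `summarizeChunk` is a hypothetical stand-in for a real LLM call (e.g. a LlamaIndex TS query engine), and the characters-per-token ratio is a rough heuristic, not an exact tokenizer.

```typescript
const CHARS_PER_TOKEN = 4; // rough heuristic; use a real tokenizer for accuracy

// Split text into chunks that stay under a token budget.
function chunkText(text: string, maxTokens: number): string[] {
  const maxChars = maxTokens * CHARS_PER_TOKEN;
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

// Stub: replace with an actual LLM summarization call.
function summarizeChunk(chunk: string): string {
  return chunk.slice(0, 100);
}

// Map-reduce: summarize each chunk, then summarize the concatenation of the
// partial summaries; recurse if the combined summaries are still too large.
function summarizeLarge(text: string, maxTokens: number): string {
  const partials = chunkText(text, maxTokens).map(summarizeChunk);
  const combined = partials.join("\n");
  return combined.length <= maxTokens * CHARS_PER_TOKEN
    ? summarizeChunk(combined)
    : summarizeLarge(combined, maxTokens);
}
```

Because every chunk is summarized before the final pass, no part of the document is silently dropped, which addresses the "captures the entire content" concern at the cost of extra LLM calls.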
Thank you for your assistance!