I’m not sure if this is how @hersh@literature.cafe is using it, but I could totally see myself using an LLM to check my own understanding like the following:
- Read a chapter
- Read the LLM’s summary of the chapter
- Make sure I understand each part of the LLM’s summary, and can agree or disagree with it.
Ironically, this exercise works better if the LLM “hallucinates”; noticing a hallucination in its summary is a decent metric for my own understanding of the chapter.
Ooooh, that’s a good first test / “sanity check”!
May I ask what you’re using as a summarizer? I’ve played around with running models from Hugging Face locally, but never did any fine-tuning or straight-up training “from scratch”. My (paltry) experience with the HF models is that they’re incapable of staying confined to the given context.
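For reference, the kind of thing I’ve been toying with is roughly the sketch below (Python + `transformers`; the model name, chunk size, and file name are just placeholders, not a recommendation):

```python
from transformers import pipeline

# Local summarization sketch: model choice is only an example.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_chapter(text: str, chunk_chars: int = 3000) -> str:
    """Split the chapter into rough chunks that fit the model's input
    window, summarize each chunk, then join the partial summaries."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [
        # do_sample=False (greedy decoding) to keep the output closer
        # to the given text rather than wandering off-context
        summarizer(chunk, max_length=150, min_length=40, do_sample=False)[0]["summary_text"]
        for chunk in chunks
    ]
    return "\n".join(partials)

with open("chapter_03.txt") as f:  # hypothetical chapter file
    print(summarize_chapter(f.read()))
```

Even with greedy decoding and small chunks, I still see the summaries drift beyond what the chapter actually says, which is what I meant above.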