Hi @marenas, you are correct that SnapGPT/ChatGPT had a hallucination in this situation. Most, if not all, generative AI solutions can hallucinate, and we're working on ways to reduce or eliminate these situations as we continue training the models. In the shorter term, we are also exploring ways to empower SnapGPT users to quickly "trust but verify," as the saying goes, so they can fact-check a response.
One approach we are testing internally is including sources at the bottom of the SnapGPT response. Do you think that would be helpful? Do you have any other ideas we could consider?