Hi @ai6667,
I am handling this dynamically now, to avoid exceeding the maximum number of tokens allowed by the model. I wanted to see whether that works well, and it does.
So basically, you can now use the standard max_tokens parameter and it will just work 🙂
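For illustration only, here is a minimal sketch of the idea, not the actual implementation — the 4096-token context size, the use of tiktoken, and the function name are all my assumptions:

```python
import tiktoken

MODEL_CONTEXT_SIZE = 4096  # assumed context window for the model in use

def effective_max_tokens(prompt: str, requested_max_tokens: int,
                         model: str = "gpt-3.5-turbo") -> int:
    """Cap max_tokens so prompt + completion fit in the context window."""
    encoding = tiktoken.encoding_for_model(model)
    prompt_tokens = len(encoding.encode(prompt))
    available = MODEL_CONTEXT_SIZE - prompt_tokens
    # Never request more completion tokens than the window can still hold.
    return max(0, min(requested_max_tokens, available))
```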
That said, I have also made it so that only the last 15 questions/answers are kept and used in the prompt.
That number of 15 could also be made configurable. I forget what we said before, but in that case it could be a “conversation_buffer” parameter, 15 by default. What do you think?
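Just to make the trimming idea concrete, a rough sketch — the helper name build_history is mine, only conversation_buffer matches what I described above:

```python
from collections import deque

def build_history(history: list[tuple[str, str]],
                  conversation_buffer: int = 15) -> list[tuple[str, str]]:
    """Keep at most the last `conversation_buffer` (question, answer) pairs."""
    return list(deque(history, maxlen=conversation_buffer))
```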
