Hi @insiderperks,
I have kept an eye on this since I published the first version of the plugin, actually 🙂 It adds an extra level of technical difficulty to the plugin (that doesn't mean it's difficult, but there are more chances of bugs, issues, or spending a lot of time fixing the implementation, as OpenAI is still in beta, models are changing, new ones are being added, etc.).
I also wanted to analyze how to implement it in the best way possible, so that it's not just another useless feature but really something that brings a lot to the users/visitors.
I also wanted to find a way to avoid using Pinecone and the extra costs it brings.
I have been playing with it for a little while, and with the new model (which is much faster), I am less reluctant to add a few more quick requests to get a perfect response.
If you have experience with it, don’t hesitate to contact me privately, or on my Discord 🙂