Vlad, I am currently using it. It works, but it would be more efficient with a bit more polish. For example, the vision LLM should allow a prompt and a picture to be uploaded at the same time; right now I always have to wait for the research assistant to output something I probably don't want before I can type in an additional prompt. There is also no way to add a picture or document in the middle of a research session, since the previous conversation resets once a pic is uploaded.
So for Ultimate plan features, probably adding a fully working MLLM would be the best option? No need to choose between models, no need for a lot of switching, just one place for a whole research project (and that's why storing the previous conversation is quite crucial).
In my personal opinion, it would also be best to let Ultimate plan users have several custom assistants. Currently only one is included?