Perhaps the right solution is to augment the /json tool to also validate against a schema? That way the feature would be available to all Assistant models, not just those using Anthropic's backend.
At any rate, I have a custom assistant whose system prompt is basically "Take the user's input and translate it into JSON according to this schema: ..." It produces output that fails the schema I'd say 10-20% of the time. It's for a low-stakes application that just throws an error whenever it gets passed bad input, but all that to say: this is actually one of my favorite use cases for LLMs, and I'd love the ability to leverage some schema checking to make it more reliable, wherever that happens in the stack. I was reading today that Anthropic announced support for this, so I thought I'd open a feature request.
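For concreteness, here's a rough sketch of the kind of check I'm doing on my side today, and what I'd love the tool to handle for me. This uses the Python jsonschema package; the schema and the ask_model() helper are made up for illustration:

```python
import json
import jsonschema

# Placeholder schema; my real one is more involved.
SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "quantity": {"type": "integer"},
    },
    "required": ["name", "quantity"],
}

def parse_reply(reply: str) -> dict:
    """Parse the model's reply, raising if it doesn't match the schema."""
    data = json.loads(reply)  # raises json.JSONDecodeError on malformed JSON
    jsonschema.validate(data, SCHEMA)  # raises jsonschema.ValidationError on mismatch
    return data

def ask_with_validation(prompt: str, max_retries: int = 2) -> dict:
    # ask_model() is hypothetical: it stands in for whatever call
    # sends the prompt to the assistant and returns its text reply.
    reply = ask_model(prompt)
    for _ in range(max_retries):
        try:
            return parse_reply(reply)
        except (json.JSONDecodeError, jsonschema.ValidationError) as err:
            # Feed the validation error back so the model can correct itself.
            reply = ask_model(f"{prompt}\n\nYour last reply was invalid: {err}")
    return parse_reply(reply)  # final attempt; let any exception propagate
```

If the /json tool did something like the validate-and-retry loop above internally, I could drop all of this wrapper code.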