GPT-5 Tool Call Improvements


Unless you have been living under a rock, you have probably been bombarded with news of GPT-5's release.

I already mentioned it briefly in another post coming this week, but I wanted to highlight some things I found interesting, like the improvements to tool calls covered in the launch video (around the 44:10 mark).

They trained the model to make tool calls that are NOT wrapped in JSON, because emitting free-form text makes it easier for the model to formulate a valid response. I am curious how that will go in practice. Will whatever is calling the model be able to read a more free-form format?
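As a rough sketch of why this helps (the tool name and harness here are hypothetical, not OpenAI's actual API): with JSON-wrapped calls the model has to escape its payload correctly, while a free-form call just hands the raw text to the tool.

```python
import json

def run_sql(query: str) -> str:
    # Stand-in for a real tool; it just echoes the query it received.
    return f"executed: {query}"

# A payload with an awkward quote the model might emit for a SQL tool.
raw_payload = 'SELECT name FROM users WHERE note = "it\'s fine";'

# JSON-wrapped style: the model must produce valid JSON, with the payload
# escaped correctly, before the harness can json.loads() it back out.
json_call = json.dumps({"name": "run_sql", "arguments": {"query": raw_payload}})
args = json.loads(json_call)["arguments"]
result_json_style = run_sql(args["query"])

# Free-form style: the harness passes the model's raw text straight through,
# so there is no JSON escaping for the model to get wrong.
result_freeform_style = run_sql(raw_payload)

assert result_json_style == result_freeform_style
```

Both paths end at the same tool call; the free-form one just removes a whole class of "invalid JSON" failures from the model's side.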

They are letting you define a grammar that the model should respect. This is essentially a formal definition of the allowed output format that you can pass in, and the model will try to adhere to it in its response.

They added tool call "preambles," where the model explains its plan before it calls the tool, which is awesome for debugging. I hope there is a setting to turn that off to save tokens when you don't need to debug.
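If there isn't a built-in toggle, the harness could always drop preambles itself. A minimal sketch, with entirely made-up field names for what an assistant turn might look like:

```python
# Hypothetical shape of an assistant turn that includes a preamble item
# before the actual tool call (these field names are illustrative only).
turn = [
    {"type": "preamble", "text": "I'll look up the user's plan first."},
    {"type": "tool_call", "name": "get_plan", "arguments": "user_id=42"},
]

def strip_preambles(items: list, debug: bool = False) -> list:
    """Keep preambles while debugging; drop them otherwise to save tokens."""
    if debug:
        return items
    return [item for item in items if item["type"] != "preamble"]

# Production path: only the tool call survives.
assert len(strip_preambles(turn)) == 1
# Debug path: the preamble is kept for inspection.
assert strip_preambles(turn, debug=True) == turn
```

Of course, filtering client-side only saves you tokens on subsequent turns where the transcript is fed back in; you still pay for generating the preamble in the first place, which is why a real setting would be nicer.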

I got to go hands-on with GPT-5 when they teamed up with Cursor to offer free, limited usage of the model.

The model was actually marginally better than some of the others I have tried, but nothing groundbreaking. I don't have any solid benchmarks, but it seemed to get stuck in a corner a bit less than other models.

The main "benchmark" I found impressive about GPT-5 was the price. As I have previously predicted, it looks like they are focused less on making massive improvements, at least in their flagship models, and more on decreasing computational costs to be more competitive in the market.