OpenAI Allows Developers Better Control Over ChatGPT Responses


OpenAI announced changes to its File Search system last week, giving developers more control over how the artificial intelligence (AI) chatbot picks information for its responses. The improvement has been made to ChatGPT's application programming interface (API) and will let developers not only inspect the behaviour of the chatbot's retrieval method but also fine-tune it. This way, developers can ensure that only the most relevant results are used. Notably, an earlier report claimed that the company is planning to launch another AI model, dubbed 'Strawberry', that can improve ChatGPT's mathematics and logical reasoning.

OpenAI Improves ChatGPT API for Developers

The AI firm announced the changes to the API in a post on X (formerly known as Twitter). In essence, the upgrade improves the controls for File Search in the Assistants API. It allows developers to check the results picked by the chatbot and make further adjustments to suit their requirements.

The API is different from the consumer-facing ChatGPT website and apps. While the interface end users see is fine-tuned by OpenAI and set to behave in a certain way, developers who build internal tools for companies or integrate the chatbot into various apps and software require more freedom.

This could be because, while the public version of ChatGPT is configured for general use, the API version is typically deployed for one specific function. To excel at that, developers need the AI to make no errors and return responses of the highest quality.

Until now, developers did not have an option to fine-tune the API so that the chatbot generates relevant responses for their particular use cases; with the new control options, this will change. OpenAI, on its support page, highlighted how this works.

First, developers can now inspect the File Search results. The File Search tool in the Assistants API picks the chunks of information it considers relevant for a particular query. Developers will now be able to check the results the AI picked and the information it returned in past runs, which is said to give them more insight into the tool's workings.
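As an illustration, a minimal sketch of what such an inspection might look like with OpenAI's Python SDK follows; the include parameter, the placeholder IDs, and the result fields used here are assumptions based on the Assistants API beta and may differ from the current documentation.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder IDs for a completed run that used the File Search tool.
THREAD_ID = "thread_abc123"
RUN_ID = "run_abc123"

# Request the run steps along with the content of the File Search results,
# so the chunks the tool retrieved can be examined after the fact.
run_steps = client.beta.threads.runs.steps.list(
    thread_id=THREAD_ID,
    run_id=RUN_ID,
    include=["step_details.tool_calls[*].file_search.results[*].content"],
)

# Walk through the tool calls and print each retrieved chunk's source file
# and relevance score, giving a view of what fed the final answer.
for step in run_steps.data:
    details = step.step_details
    if details.type != "tool_calls":
        continue
    for call in details.tool_calls:
        if call.type != "file_search":
            continue
        for result in call.file_search.results or []:
            print(result.file_name, result.score)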

Further, developers can adjust the settings of the ranker that File Search uses to order the retrieved information before the chatbot generates a response. By setting a score threshold between 0.0 and 1.0, they can control which pieces of information the AI uses and which it ignores.
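A sketch of how the ranker might be configured when creating an assistant, again using the Python SDK, is shown below; the ranking_options object, the "auto" ranker name, and the 0.6 threshold are illustrative values based on the format OpenAI has documented and should be checked against the current API reference.

from openai import OpenAI

client = OpenAI()

# Create an assistant whose File Search tool only keeps retrieved chunks
# scoring at or above the chosen threshold: 0.0 keeps everything, while
# values closer to 1.0 keep only the closest matches.
assistant = client.beta.assistants.create(
    name="Docs helper",  # illustrative name
    model="gpt-4o",
    tools=[{
        "type": "file_search",
        "file_search": {
            "ranking_options": {
                "ranker": "auto",        # let OpenAI choose the ranker version
                "score_threshold": 0.6,  # illustrative cut-off between 0.0 and 1.0
            }
        },
    }],
    tool_resources={
        "file_search": {
            "vector_store_ids": ["vs_abc123"],  # placeholder vector store ID
        }
    },
)

print(assistant.id)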