OpenAI has released the option to fine-tune GPT-3.5 Turbo. Starting Wednesday (the 23rd), developers can customize the model to better meet the needs of their applications, producing more efficient, purpose-built AI. The same feature is expected to come to GPT-4 in the third quarter.
GPT-3.5 is a powerful generative model, but since its release, developers and companies have demanded greater freedom to customize it for specific tasks. According to OpenAI, this capability is finally ready.
In internal testing of the tool, OpenAI found that a fine-tuned GPT-3.5 Turbo is easier to steer, formats its output more consistently, and can adopt a customized response tone. It also allowed prompts to be shortened while preserving the chatbot's performance.
According to OpenAI, the fine-tuned models can handle up to 4,000 tokens, double the capacity of its previous fine-tuned models. Early testers have also reduced prompt size by up to 90% by fine-tuning instructions into the model itself rather than repeating them in every request, making each API call faster and cheaper.
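To make the prompt-shortening point concrete, here is a minimal sketch in Python using the OpenAI SDK (v1.x). The instructions and the fine-tuned model ID are purely illustrative assumptions: the first call repeats the full instruction set on every request, while the second relies on instructions already baked into a fine-tuned model.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

question = {"role": "user", "content": "How do I reset my password?"}

# Before fine-tuning: the full instruction set has to travel with every request.
before = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are Acme's support bot. Answer formally, "
                                      "cite the relevant policy section, and keep "
                                      "replies under 80 words."},
        question,
    ],
)

# After fine-tuning: the instructions live in the model's weights, so the prompt
# shrinks to the user's question alone (the model ID below is a placeholder).
after = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:acme::abc123",
    messages=[question],
)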
How is GPT-3.5 Turbo fine-tuned?
GPT-3.5 Turbo is customized through a series of stages, including further training on curated data. Rather than building an AI from scratch, the developer adapts OpenAI's pre-trained base model to the customer's specific context.
The process consists of providing data and sample responses, training the model, and finally deploying the customized AI, roughly as sketched below.
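As a rough illustration of those three steps, this sketch uses the OpenAI Python SDK (v1.x); the training example, file name, and model handling are assumptions for illustration rather than part of the announcement, and older SDK versions expose the same operations under different names.

import json
from openai import OpenAI

client = OpenAI()

# Step 1: provide data and sample responses as chat-formatted examples,
# written one JSON object per line (JSONL). A real job needs a larger set.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
    ]},
]
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

uploaded = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")

# Step 2: train the model by starting a fine-tuning job on top of GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-3.5-turbo")

# Step 3: deploy. Once the job reports "succeeded", the custom model is called
# like any other chat model.
job = client.fine_tuning.jobs.retrieve(job.id)
if job.status == "succeeded":
    reply = client.chat.completions.create(
        model=job.fine_tuned_model,
        messages=[{"role": "user", "content": "How do I reset my password?"}],
    )
    print(reply.choices[0].message.content)

In practice the training job takes time to finish, so the status check would be polled or handled asynchronously rather than run immediately after the job is created.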
OpenAI's plans
Soon, OpenAI will also offer fine-tuning for GPT-4, the model that powers the paid version of ChatGPT. Further ahead, the company intends to add support for fine-tuning with function calling.