New OpenAI model and updating Workflows

A couple of quick tips:

  1. OpenAI has just released a new model, gpt-4o-mini. It has nearly all of the power of GPT-4o but at a fraction of the cost of GPT-3.5.

To give you an example, getting a summary of an average email would probably use something like 2,603 tokens in and 1,372 out, which equates to a cost of:

GPT-4o = $0.033595
GPT-3.5 = $0.0033595
GPT-4o Mini = $0.00121365

I know those costs look tiny. However, if each email is AI spam-checked, categorised, summarised and a draft reply written (if needed), that is roughly four AI calls per email; multiply that up by, say, 1,000 emails a month and the cost savings start to become more relevant:

GPT-4o = $134.38
GPT-3.5 = $13.438
GPT-4o Mini = $4.8546
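
If you want to sanity-check those numbers, the arithmetic is just tokens multiplied by each model's per-million-token price. A minimal sketch, using the published per-1M-token rates at the time of writing (do check OpenAI's current pricing page before relying on these):

// Published USD prices per 1M tokens at the time of writing - these change,
// so check OpenAI's pricing page before relying on them.
const prices = {
    "gpt-4o":      { input: 5.00, output: 15.00 },
    "gpt-3.5":     { input: 0.50, output:  1.50 },
    "gpt-4o-mini": { input: 0.15, output:  0.60 },
};

// Cost of a single call: tokens in/out scaled by the per-million rates
function costPerCall(model, tokensIn, tokensOut) {
    const p = prices[model];
    return (tokensIn * p.input + tokensOut * p.output) / 1_000_000;
}

// The example above: 2,603 tokens in, 1,372 out, four calls per email,
// 1,000 emails a month.
for (const model of Object.keys(prices)) {
    const perCall = costPerCall(model, 2603, 1372);
    console.log(model, `$${perCall.toFixed(8)} per call`, `$${(perCall * 4 * 1000).toFixed(2)} per month`);
}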

  2. That brings us to updating your Tape Workflows. If you only have one AI automation then great, you go in and change it. If you have 5 it gets more annoying. What if you have 50 or more?

I would suggest a ‘variables’ app, or whatever you would like to call it: just an app hidden away to hold information that you want to use across multiple workflows (a rough sketch follows the steps below):

  1. Search for the relevant record holding the data you want to pull in
  2. Add the data to a variable
  3. Call the variable in your call to OpenAI
  4. Enjoy the updated model
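
To make that concrete, here is a rough sketch only, not Tape's actual workflow API. It assumes your script step can make HTTP calls with fetch, that the search step has exposed the record's model field as a modelName variable, and that OPENAI_API_KEY and emailBody are placeholders you would swap for your own:

// Step 2's variable, e.g. "gpt-4o-mini", pulled from the 'variables' record
const model = modelName;

// Step 3: use the variable in a standard OpenAI chat completions call
const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${OPENAI_API_KEY}`, // placeholder for your stored key
    },
    body: JSON.stringify({
        model: model, // edit the variables record once and every workflow follows
        messages: [
            { role: "system", content: "Summarise the email you are given." },
            { role: "user", content: emailBody }, // placeholder for the email text
        ],
    }),
});
const data = await response.json();
const summary = data.choices[0].message.content;

The model name never lives in the workflow itself, so updating the single variables record updates all of your workflows at once.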

Fantastic post! I recently started using OpenAI with Tape, and it is such a time saver (and very easy to implement).

A question about the code above:

Is response_format: { "type": "json_object" } needed in the script? I don’t have that in my OpenAI script, and it is working without issues.


Good question and well-spotted 🙂

It was brought in at the end of last year with gpt-3.5-turbo-1106 and gpt-4-1106-preview. It is supposed to ensure that the response is always a valid JSON object, and whilst you don’t ‘need’ it, it does increase the chances of receiving what you want.

Important: you must still include an instruction in your system or user message telling the model to produce JSON output.
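
To make that concrete, here is a minimal sketch of a request body using JSON mode (the model and prompts are just placeholders; emailBody is a hypothetical variable). Note the explicit JSON instruction in the system message, which the API requires alongside response_format:

const body = {
    model: "gpt-4o-mini",
    response_format: { type: "json_object" }, // ask for JSON mode
    messages: [
        {
            role: "system",
            // JSON mode errors out unless the messages actually mention JSON
            content: "You are an email summariser. Respond only with a valid JSON object.",
        },
        { role: "user", content: emailBody }, // placeholder for the email text
    ],
};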

For complex tasks, I also often include an example of the output I am looking for, such as:

{
  "title": "{new_project_title_if_updated}",
  "projectDescription": "{detailed_project_description}",
  "deliverables": [
    {
      "title": "{deliverable_1_title}",
      "description": "{deliverable_1_description}",
      "actions": [
        "{deliverable_1_action_1}",
        "{deliverable_1_action_2}",
        "{deliverable_1_action_3}",
        ...
      ]
    },
    {
      "title": "{deliverable_2_title}",
      "description": "{deliverable_2_description}",
      "actions": [
        "{deliverable_2_action_1}",
        "{deliverable_2_action_2}",
        "{deliverable_2_action_3}",
        ...
      ]
    },
    ...
  ]
}

Now I do find that the later models seem to like giving you the response wrapped in Markdown code fences, so I tend to have a little bit of code that strips all that out, like:

// Trim whitespace first so the fence checks below match reliably
content = content.trim();

// Remove leading "```json\n" if present
if (content.startsWith("```json\n")) {
    content = content.substring(8); // "```json\n" is 8 characters
}

// Remove trailing "```" if present
if (content.endsWith("```")) {
    content = content.substring(0, content.length - 3);
}

content = JSON.parse(content); // throws if the remainder still isn't valid JSON

As an extra tip, did you know that the OpenAI API format seems to have become a de facto standard? You can basically use much the same code with a number of other providers, Perplexity among them.

They are not exactly the same for everything (for example, Perplexity doesn’t really support system prompts in its online models, yet) but they are close enough for a lot of tasks.
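
In practice that usually just means swapping the base URL, API key and model name. A rough sketch against Perplexity (the endpoint and model name are from their docs at the time of writing, so do double-check them; PERPLEXITY_API_KEY and prompt are placeholders), with everything in a single user message since their online models don't really take system prompts:

const response = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${PERPLEXITY_API_KEY}`, // placeholder for your key
    },
    body: JSON.stringify({
        // Model names change often - check Perplexity's docs for a current one
        model: "llama-3-sonar-small-32k-online",
        // No system prompt: their online models don't really support them yet
        messages: [{ role: "user", content: prompt }],
    }),
});
const data = await response.json();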
