[✅ Solution] Action execution failed due to timeout. Avoid long runtimes for your workflow

I’m running a workflow with 1000 – 2000 records which need to be updated, and I get the error: Action execution failed due to timeout. Avoid long runtimes for your workflow.
Has anyone seen this error before?
Does anyone know what the runtime limit is in cases like this?

Hi @tomaz

I wrote a whole reply saying how I had had it and not looked into it, and then I remembered I had seen this somewhere:

Tape applies limits to all executed workflow automations regarding utilized computation power and time.

  • The maximum time a flow can run is currently 3 minutes
  • Maximum number of actions consumed for a single run: 1000 (One thousand)
  • Memory & CPU limitations apply
  • There is a limit for parallel async operations (avoid heavy parallel HTTP operations and perform them sequentially instead)

Flows that exceed any of the above limits fail with a proper error message. Split your work into multiple flows or avoid heavy computations, e.g. for large amounts of records.
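The “sequential instead of parallel” advice from the docs can be sketched in plain JavaScript. This is a minimal illustration, not Tape’s actual API: `updateRecord` here is a hypothetical placeholder for whatever HTTP call or action your flow performs per record.

```javascript
// Placeholder for a real per-record update (e.g. an HTTP request);
// NOT Tape's actual API, just a stand-in for illustration.
async function updateRecord(record) {
  return { ...record, updated: true };
}

async function updateAll(records) {
  const results = [];
  for (const record of records) {
    // Awaiting inside the loop performs one operation at a time.
    // The parallel alternative, Promise.all(records.map(updateRecord)),
    // would fire every request at once and is what the limits warn against.
    results.push(await updateRecord(record));
  }
  return results;
}
```

Sequential awaits trade speed for predictable resource use, which is the point when the platform caps memory, CPU, and parallel async operations.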

Limitations | Tape Developers (tapeapp.com)

I have left my original waffle in anyway, as I am not sure the ‘Limitations’ fit completely, and the error description certainly doesn’t.

Yes, I get it sometimes and have been meaning to look into it. I have a nightly automation that runs through some checks on only 100 or so records (okay, it’s quite a lot of checks), and just every so often it fails with that error; most of the time it works fine. Hence it is a low priority for me to look into. Also, if I trigger it manually it will again fail sometimes, but it fails almost immediately, so I am not sure how it is determining a long runtime.

I think I have also had it when an automation has been awaiting a response from an API, mainly GPT, but that one makes more sense: when I have seen it happen there does seem to have been quite a delay.

I don’t think that helps you much, but I was thinking of fixing mine by splitting my nightly checks into two sets. If you want a workaround, maybe you could search for records without an updated date, limit them to 500 records (or the maximum that works), and then run that however many times you need to get through them all. But as I say, it seems a little random as to when that error crops up.
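The search-and-limit workaround above amounts to a simple batching filter. A minimal sketch, assuming records carry a hypothetical `updatedAt` field that gets set once a record has been processed (names and the 500-record batch size are assumptions to tune, not anything Tape prescribes):

```javascript
// Assumed batch size; tune to whatever your flow completes within the limit.
const BATCH_SIZE = 500;

// Pick the next batch of records that still lack an updated date,
// mirroring the "search for records without an updated date, limit to 500" idea.
function nextBatch(records, batchSize = BATCH_SIZE) {
  return records.filter(r => !r.updatedAt).slice(0, batchSize);
}
```

Each scheduled run processes one batch and stamps `updatedAt` on what it touched; once every record has a date, `nextBatch` returns an empty array and the flow has nothing left to do.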

@Jason of course this is helpful.
In the meantime I have set up the flow to run every hour until all the records are updated.
Thanks for your quick reply.