Performance degradation - actions restarted

June 5, 2025, 6:10 AM UTC. Resolved after 2h 18m

Updates

Resolved

All affected actions have been restarted and are no longer failing with the error caused by the misconfiguration. No action is needed on your side; contact our support if you need any further assistance. Thank you for your patience during this incident.

June 5 8:26 AM UTC
Monitoring

We have rolled our services back to the last stable version, identified the sources/flows affected by the misconfiguration, and are gradually restarting them. All affected actions should be restarted within a few minutes. You can tell from the action logs whether an action was affected: 3 consecutive failures followed by another live/broken log. We will keep you informed until the final resolution.
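
If you want to check programmatically whether an action was affected, here is a minimal sketch of the pattern described above. The status strings and the flat status list are assumptions for illustration, not our platform's log format:

```python
# Minimal sketch (assumed log shape): an action counts as affected if its
# log shows 3 consecutive failures followed by at least one later entry.

def was_affected(statuses):
    """statuses: chronological list of log entry states, e.g. 'ok'/'failed'."""
    for i in range(len(statuses) - 3):
        if all(s == "failed" for s in statuses[i:i + 3]):
            return True  # entry at index i + 3 is the follow-up live/broken log
    return False

# Failures caused by the misconfiguration, then a restarted run:
print(was_affected(["ok", "failed", "failed", "failed", "ok"]))  # True
print(was_affected(["ok", "failed", "ok"]))                      # False
```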

June 5 6:52 AM UTC
Identified

The action failures caused by this incident share one of two error messages: "source union: EOF" or "Extraction failed: too many iterations: 300". If any of your actions fails with one of these errors, please ignore it; we will restart such actions on our side.
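
A minimal sketch for checking an exported log file against these two messages; the file name and plain-text log export are assumptions, not a supported API:

```python
# Minimal sketch: scan an exported log file for the incident's error
# messages. "action.log" is a placeholder path, not a real product file.

INCIDENT_ERRORS = (
    "source union: EOF",
    "Extraction failed: too many iterations: 300",
)

def incident_related(log_path):
    with open(log_path, encoding="utf-8") as f:
        return any(err in line for line in f for err in INCIDENT_ERRORS)

if incident_related("action.log"):
    print("Failure matches the incident; it will be restarted on our side.")
```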

June 5 6:47 AM UTC
Identified

We have identified a service degradation affecting some sources and data flows. The root cause has been traced to an application misconfiguration. This misconfiguration may have caused certain sources or flows to become temporarily non-functional.

Our team is actively working to resolve the issue. Actions that have failed due to the misconfiguration are being identified, and once the fix is applied, these will be restarted automatically to restore normal operations.

We appreciate your patience and will provide updates as the situation progresses.

June 5 6:34 AM UTC
Investigating

We are currently experiencing a performance degradation affecting extractions. Users may notice slower response times and delays in processing requests. Our engineering team is actively investigating the root cause and working to restore full performance as quickly as possible.

June 5 6:14 AM UTC