Why is Bubble returning IDs that do not exist?

My flow returned two errors. Both of them say that I am trying to PATCH to Bubble with IDs that do not exist. And yet, for some reason, on the Bubble import, they do exist. I feel like I’m going crazy.

Here’s one error on the Send to Bubble:

{
  "body": {
    "message": "Missing object of type alignments: object with id 1605507237304x827471006854594300 does not exist",
    "status": "MISSING_DATA"
  },
  "statusCode": 404
}

If you try to open the URL https://app.subaligner.com/version-test/api/1.1/obj/alignments/1605507237304x827471006854594300 it indeed does not exist.
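
For anyone who wants to check a batch of IDs the same way, here’s a rough sketch in Python (assuming Bubble’s standard Data API endpoints and an API token; the token and the ID list are placeholders):

```python
import requests

# Rough sketch: check whether each Bubble object still exists before PATCHing it.
# The API token and the ID list are placeholders, not values from this flow.
BASE_URL = "https://app.subaligner.com/version-test/api/1.1/obj/alignments"
HEADERS = {"Authorization": "Bearer YOUR_BUBBLE_API_TOKEN"}

ids_to_patch = ["1605507237304x827471006854594300"]  # the ID from the error above

for obj_id in ids_to_patch:
    resp = requests.get(f"{BASE_URL}/{obj_id}", headers=HEADERS)
    if resp.status_code == 404:
        print(f"{obj_id} does not exist in Bubble; skip the PATCH")
    else:
        resp.raise_for_status()
        print(f"{obj_id} exists; safe to PATCH")
```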

And yet, somehow magically, on my GET request from Bubble, it does…52 TIMES!

What am I missing here? How is Parabola able to import rows with IDs that do not exist? Even more, how did it find 52 of them??

Now my database is screwed up because this flow is not working properly. This is a flow that I have been struggling with for…ever. I don’t know if it is just because I lack the technical experience, but I’m tired of this not working and I wish there was an easier solution. Do you ever feel like you are stuck in purgatory, just digging holes and filling them again until the end of time?

Hi @Nathan_Lively,

Definitely understand how API errors can be disappointing. I wanted to let you know that we’ve provided 4 extra credits to you, to account for those used during this flow’s failed run with errors. We’ve begun looking into it and will update you on this post tomorrow morning with a few possible solutions and workarounds. If you have any other questions you’d like assistance with tomorrow morning, feel free to post them on this Community thread or reach out to help@parabola.io. Thanks for your patience with this. :slightly_smiling_face:

Thanks! I’m sure it’s user error, but besides reading the documentation, I’m not sure where else to look.

I just ran a Remove Duplicates action on everything pulled in from Bubble, based on the ID number. Parabola says that out of 16,800 records, only 267 are unique. How is this possible?

I decided to run another test to see if I could delete all of those duplicates. It failed.

It’s trying to delete rows with IDs that do not exist. So confusing.

{
  "body": {
    "message": "Missing object of type alignments: object with id 1605507218684x283528512282246400 does not exist",
    "status": "MISSING_DATA"
  },
  "statusCode": 404
}

@Nathan_Lively for clarity, are GET and PATCH all of the API request types you have in this flow, or are there also other types? For example, in addition to updating rows, are you also deleting rows? Depending on how your flow’s structured, some rows may be deleted before they can be updated.

The records duplication is most likely due to pagination and rate limiting settings in the import step Pull from an API. What tends to happen is instead of pulling in only unique pages, it’ll pull in the same page(s) as many times as specified in the API import step’s Rate Limiting field of Maximum pages to fetch. Do you have a total amount of unique records you’re expecting to pull in?

Something to note is that if you have a later API step with the request type of DELETE, and this flow’s initial API import created duplicate rows from its pagination settings, then another error can occur: the step deletes the ID in the first row it appears in, but then errors when it tries to delete the same ID in the repeated rows. (This may be where that error of rows with IDs that don’t exist is coming from – the first row they existed in was deleted, so they no longer exist in the later duplicate rows.)
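
In other words, once the import contains duplicate rows, only the first DELETE for a given ID can succeed and every repeat returns a 404 like the one above. Here’s a minimal sketch of deduplicating the IDs before deleting (the token and ID values are placeholders):

```python
import requests

# Minimal sketch: deduplicate imported IDs before sending DELETEs,
# so each Bubble object is only deleted once.
BASE_URL = "https://app.subaligner.com/version-test/api/1.1/obj/alignments"
HEADERS = {"Authorization": "Bearer YOUR_BUBBLE_API_TOKEN"}

imported_ids = [
    "1605507218684x283528512282246400",
    "1605507218684x283528512282246400",  # duplicate row from pagination
]

for obj_id in dict.fromkeys(imported_ids):  # keeps the first occurrence, drops repeats
    resp = requests.delete(f"{BASE_URL}/{obj_id}", headers=HEADERS)
    print(obj_id, resp.status_code)  # deleting the same ID a second time would return 404
```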

If you’d like to share a direct link to this flow, feel free to reach out to us at help@parabola.io and we can take a further look into this together.

1 Like

Thanks @Adeline. First I’ll answer your questions related to the test flow I set up (Bubble Delete Duplicates) and then I’ll answer them in relation to the original one that started this thread. You can ignore the test flow if you’d like.

what are all of the API request types you’re doing in this flow (GET and PATCH, or more)?

Bubble Delete Duplicates flow: GET, DELETE
Bubble patch new Big CSV flow: GET, PATCH, POST, DELETE

in addition to updating rows, are you also deleting rows?

Bubble Delete Duplicates flow: only deleting
Bubble patch new Big CSV flow: updating, posting new, and deleting rows

The records duplication is most likely due to pagination settings in the import step Pull from an API.

I wondered about this. I never have an exact multiple of 100 records, yet it always says that’s how many it is pulling in.

What tends to happen is instead of pulling in only unique pages, it’ll pull in the same page(s) as many times as specified in the API import step’s Rate Limiting field of Maximum pages to fetch.

I’m not sure what to do about that. I always set the Maximum pages to fetch to the total rows in Bubble / 100, rounded up to make sure I get all the records. Is there a better way to avoid duplicates?

Do you have a total amount of unique records you’re expecting to pull in?

Yes, I always check that first. Right now I am going through and deleting all of them manually because I couldn’t get the Parabola flow to work, but when I started, it was something like 9,867 rows.

the first row they existed in was deleted, so they no longer exist in the later duplicate rows.)

I have considered that this may be a problem, but I don’t know how to fix it.

If you’d like to share a direct link to this flow, feel free to reach out to us at help@parabola.io and we can take a further look into this together.

Done!

1 Like

This does help me understand the problem with DELETE, but what about PATCH? In my flow I use Remove Duplicate Rows before the PATCH. Seems like it shouldn’t have the same problem.

@Nathan_Lively great, thanks for the information! We’ve received your email and will continue troubleshooting through that channel, but wanted to leave a closing note here with a useful flow build. This structure helps one find the right API import pagination and rate limiting settings without duplicating row values:

3 Steps: Pull from an API -> Count by group -> Filter rows


Within the Pull from an API step’s Pagination settings, please select the Offset and limit type and enter the following:

  • Offset key = cursor
  • Offset starting value = 0
  • Increment each page by = 100 (for Offset and limit type, this is the same as defined in the Limit value field. Think of this as the number of rows per page.)
  • Limit key = limit
  • Limit value = 100

In this API import step’s Rate Limiting settings, please enter the following:

  • Maximum requests per minute = 1000 (limit seen in Bubble’s API docs)
  • Maximum pages to fetch = # / 100 (# is the total number of unique records you expect to pull in, divided by the Limit value. This will vary depending on how many unique records you expect.)

Hope that clarifies things for anyone viewing this thread, and that you’re able to double-click the screenshots to enlarge the other steps’ settings. Please note: the API import settings listed above are for this specific Bubble use case; the exact Pagination and Rate Limiting entries may vary depending on the API and use case. With the Offset and limit type, the Limit value you define is the same value you’d enter in Increment each page by. In the case above, if the Limit value is 100, then Increment each page by is also 100, since each page contains 100 rows.
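
For reference, here is roughly what those Pagination settings translate to as raw requests against Bubble’s Data API (a sketch; the token is a placeholder, and the results/remaining fields follow the Data API’s documented response shape):

```python
import requests

# Sketch of the offset-and-limit pagination described above: the cursor starts at 0
# and advances by the limit (100) each page, so every page contains unique rows.
BASE_URL = "https://app.subaligner.com/version-test/api/1.1/obj/alignments"
HEADERS = {"Authorization": "Bearer YOUR_BUBBLE_API_TOKEN"}

limit = 100
cursor = 0            # Offset starting value
all_rows = []

while True:
    resp = requests.get(BASE_URL, headers=HEADERS,
                        params={"cursor": cursor, "limit": limit})
    resp.raise_for_status()
    body = resp.json()["response"]
    all_rows.extend(body["results"])
    if body.get("remaining", 0) == 0:   # no more pages left
        break
    cursor += limit                     # "Increment each page by" = 100

print(f"Pulled {len(all_rows)} unique rows")
```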

1 Like

Thanks @Adeline! Ok, I’m a bit confused.

This structure helps one find the right API import pagination and rate limiting settings

It looks like this structure removes duplicates, correct? It doesn’t calculate the settings for me.

  • Increment each page by = 100

Now this is suuuuper confusing and is probably the source of my duplicates. I have mine set to 1, not 100. I thought this value meant that it would jump from page 1 to page 100, then 200. No?
From How to use the API integration in Parabola

The Increment Pagination Value is the number of pages to advance to. A value of 1 will fetch the next page. A value of 10 will fetch every tenth page.

I don’t want the next 100th page. I want the next page.

Should be total records / 100, right?
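
Checking my arithmetic against the settings above (assuming the ~9,867 unique records I mentioned earlier and a limit of 100 rows per page):

```python
import math

# Worked example with ~9,867 unique records at 100 rows per page.
total_records = 9867
limit = 100

maximum_pages_to_fetch = math.ceil(total_records / limit)   # 99 pages
increment_each_page_by = limit                               # 100: the cursor advances by rows, not pages

print(maximum_pages_to_fetch, increment_each_page_by)
# Cursor values requested: 0, 100, 200, ..., 9800
```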

I tried these recommended settings. I keep getting this error.

"<html>\n<head><title>524 Origin Time-out</title></head>\n<body bgcolor=\"white\">\n<center><h1>524 Origin Time-out</h1></center>\n<hr><center>cloudflare-nginx</center>\n</body>\n</html>\n"

Here are the settings. I’m still suspicious of that Increment each page by value.

@Nathan_Lively yes, sorry for the confusion – the previously noted settings were to help with initially setting up the API import and connecting to the API, since that step can take a lot of time to load with the corrected settings, depending on the dataset size. We’ve updated the screenshot and instructions. We’ll get back to you as soon as possible. Please note: due to the volume of requests we receive across multiple channels, this may take a few hours. Thanks again for your patience with this. :slightly_smiling_face:

Hey Nathan,

Closing the loop here. The 524 error is a timeout on Bubble’s end. Most likely, it cannot paginate all of your results within a specified time frame because the request is too large.

I might suggest splitting up your API call into multiple requests and using the Offset and limit pagination @Adeline mentioned above.

If you have 87 pages to return, you could have 3 imports and set the Maximum pages to fetch for each import to something like this:

  1. 30 max pages
  2. 30 max pages
  3. 27 max pages

The Offset starting value for each import could be adjusted to bring in rows that start where the last import left off, as sketched after the list below.

  1. 0 offset
  2. 3,000 offset
  3. 6,000 offset
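
Putting numbers to that split (a sketch assuming 100 rows per page, as configured above):

```python
# Sketch of the three-import split described above, assuming 100 rows per page.
limit = 100
pages_per_import = [30, 30, 27]   # Maximum pages to fetch for imports 1-3

offset = 0
for i, pages in enumerate(pages_per_import, start=1):
    print(f"Import {i}: Offset starting value = {offset}, Maximum pages to fetch = {pages}")
    offset += pages * limit       # next import starts where this one left off: 0, 3000, 6000
```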

I’ve sent you a direct email with some follow-up information, so feel free to reply to that and we can get you pointed in the right direction!
