JSON parsing to columns instead of rows

Hi, I am pulling JSON via the API pull step, and the format is not great, so it creates 3,000 columns. I am able to flip the table and clean it as needed for my results. However, if I try to pull 2+ requests, I exceed the maximum allowable column count.

Ideally I would like to save the raw JSON response somewhere and post-process it via regex to fix the issues so it reads into rows. Then I could batch process all the files without running into column limits.
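To illustrate the shape problem with a made-up example (the real responses are messier): a wide object keyed by ID expands column-wise, while the same data as a list of records expands row-wise.

```python
import json

# Made-up example of a "wide" response: records keyed at the top level,
# so a column-wise expansion produces id_1.price, id_2.price, ... columns.
wide = {"id_1": {"price": 10}, "id_2": {"price": 12}}

# The same data reshaped as a list of records expands into rows instead.
rows = [{"id": key, **record} for key, record in wide.items()]
print(json.dumps(rows))
# → [{"id": "id_1", "price": 10}, {"id": "id_2", "price": 12}]
```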

Any ideas or workarounds would be appreciated. Is there a subroutine pattern I can use?

Thanks,
Anthony

Hi @anthony, welcome to Parabola

Is there by chance a Response JSON setting configured on your Pull from API step? If so, could you try removing that setting so the data set is condensed when it's brought into the Flow?

If there is no Top Level Key on your API pull, another option may be to utilize multiple API steps to pull in the data. After pulling in the data, you can use the Stack Tables or Combine Tables step to connect the data sets.

Let us know if you have any additional questions getting this set up!
Ayana

The confusion on my part: the API tool pulls the JSON and allows me to run regex to clean up the format as needed before sending it to a table.

However, the Enrich with API tool is fed 2 IDs and automatically parses each JSON response, and that is the problem. If it just held the top-level JSON per response, the way the Pull from API tool does, I could regex and fix the format of each response before sending it to the Expand JSON tool.

Thanks for providing those details! Could you confirm if there are any settings enabled on the Response JSON section? That may help with condensing the data when it’s first brought into the flow.

Best,
Ayana

I get the same results in Enrich with API whether the Top Level Key is selected or unselected.

Apologies for the confusion - were you hoping to utilize the Enrich with API step or the Pull from API step? Also could you explain which API Tool is pulling in the response and the format that it’s pulling in so we can better understand how to help set up your step?

If you need additional help building out your flow, feel free to email us (help@parabola.io) the URL link to your Flow so we can get a better understanding of what you're hoping to build in your automation!

Yeah, sorry for the confusion on my end.

I have multiple endpoints with different IDs and the same results format, but each could have a different number of key/value pairs.

I had started with “Pull from API” and was able to use regex to fix the JSON format so it would parse correctly with an “Expand JSON” step: rows instead of 3,000+ columns.

I then started using “Enrich with API” so I could pass the various endpoint IDs. At that point it crashed, because the total column count went above 8,000 due to the horrible JSON format.

The Enrich tool seems to automatically parse the JSON rather than leaving it raw the way Pull from API does. If Enrich worked like Pull, I could move forward by fixing each row of raw JSON from each endpoint with regex and then using the Expand JSON tool.
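For illustration only, the reshape my regex accomplishes is roughly equivalent to this minimal Python sketch (field names are made up; the real responses are messier):

```python
import json

# Made-up wide response: every field becomes its own column when expanded.
raw = '{"metric_a": 1, "metric_b": 2, "metric_c": 3}'

# Reshape into a list of key/value pairs so a JSON-expansion step
# yields one row per field instead of one column per field.
fields = json.loads(raw)
rows = [{"key": k, "value": v} for k, v in fields.items()]
print(json.dumps(rows))
```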

Does that help?

Hi @anthony - Stepping in for Ayana. Our Enrich with API step will currently expand that top row. We'd be happy to take a closer look at the Flow to see if we can suggest any workarounds. Could you please reach out to us at help@parabola.io and send over a link to the Flow that you're working on?