I’m pulling data from an API to send to Google Sheets, but the JSON contains deeply nested data from which I only need a few specific values. However, I want to keep the same number of rows I have, and not have them multiplied by the nesting levels.
Having extra rows means I need to find a way to merge them to avoid losing data.
If I can flatten the nested data into extra columns instead, I can simply delete the columns I don’t need while keeping every value linked to its row.
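Conceptually, the flattening I’m after looks like this (a stdlib Python sketch of the idea, not anything Parabola-specific; the field names are made up for illustration):

```python
def flatten(value, prefix=""):
    """Recursively flatten nested dicts/lists into one flat dict.

    Nested keys are joined with spaces and list items get an index,
    so each original record stays a single row and the nesting
    becomes extra columns instead of extra rows.
    """
    flat = {}
    if isinstance(value, dict):
        for key, val in value.items():
            flat.update(flatten(val, f"{prefix}{key} "))
    elif isinstance(value, list):
        for i, item in enumerate(value, start=1):
            flat.update(flatten(item, f"{prefix}{i} "))
    else:
        flat[prefix.rstrip()] = value
    return flat

# Hypothetical record resembling the nested API payload
task = {
    "id": "t1",
    "assignees": [{"username": "axel"}, {"username": "sam"}],
    "custom_fields": [{"name": "Status", "value": "Open"}],
}
row = flatten(task)
# One flat row with numbered columns, e.g.
# {'id': 't1', 'assignees 1 username': 'axel', 'assignees 2 username': 'sam',
#  'custom_fields 1 name': 'Status', 'custom_fields 1 value': 'Open'}
```

The unneeded columns can then be dropped afterwards without ever losing the row-to-value link.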
I searched but couldn’t find a related request.
Many thanks. Just discovered Parabola after 4 days of fighting to find a tool that can do this properly. Parabola perfectly fits the bill, and I’m SO close now!
Hi @Axel_Delgado, welcome to the Parabola Community! Happy to help as you build this out. If the duplicate rows have overlapping information, it sounds like you may want to use a Merge duplicate rows step. This would merge the information from those rows into one.
Once you’ve done that, you would then expand that information into multiple columns using a Split column step.
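Conceptually, those two steps do something like the following (a hedged Python sketch of the merge-then-split idea, not Parabola’s actual implementation; the column names are illustrative):

```python
from collections import defaultdict

def merge_duplicate_rows(rows, key, delimiter=", "):
    """Collapse rows sharing the same key, joining each column's values
    into a single delimited cell (like a Merge duplicate rows step)."""
    merged = defaultdict(lambda: defaultdict(list))
    for row in rows:
        for col, val in row.items():
            if col != key:
                merged[row[key]][col].append(val)
    return [
        {key: k, **{col: delimiter.join(vals) for col, vals in cols.items()}}
        for k, cols in merged.items()
    ]

def split_column(rows, col, delimiter=", "):
    """Expand a delimited column into numbered columns
    (like a Split column step), keeping one row per record."""
    out = []
    for row in rows:
        parts = row[col].split(delimiter)
        new = {k: v for k, v in row.items() if k != col}
        for i, part in enumerate(parts, start=1):
            new[f"{col} {i}"] = part
        out.append(new)
    return out

# Two duplicate rows for the same hypothetical task ID
rows = [
    {"id": "t1", "assignee": "axel"},
    {"id": "t1", "assignee": "sam"},
]
merged = merge_duplicate_rows(rows, key="id")  # one row, "axel, sam"
wide = split_column(merged, "assignee")        # columns "assignee 1", "assignee 2"
```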
Let me know if you have any questions!
I was asking about doing it directly because other tools offer the option to expand nested data straight into extra columns on a single row.
I understand, then, that there’s no way around it and I have to follow that process, correct?
I’m trying to do what you suggest, but I’m struggling.
The Merge duplicate rows step is where I’m stuck, and I can’t find a way to do what I want. Here’s the data structure before merging.
I’d like to merge here, but the “assignees username” column should merge only unique values, as there’s no point in repeating them. Same for the “custom_fields type_config options name” column.
However, I can’t enable the option to merge only unique values, because then the “custom_fields value” column loses its correlation to the “custom_fields name” column due to blanks and repeated numbers, and I cannot use the values properly.
Any idea what I can do?
Hey @Axel_Delgado ,
Sorry to hear that the option to merge only unique values isn’t working on your flow. Would you mind emailing us at firstname.lastname@example.org a link to your Flow so we can troubleshoot your step?
No worries, I found a solution myself. It’s a bit of a hassle, but it works.
I created a sub-table by removing all the columns with duplicated values, leaving only the ID and the expanded dataset.
Then I pivot the columns to get back to the 229 original rows, with all the values per ID on a single row.
And then I combine it back with the original table, where I’ve merged duplicate rows keeping only unique values (losing data in those columns, but I was removing them anyway).
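In rough pseudocode terms, the workaround does something like this (a stdlib Python sketch with made-up IDs and column names, just to show the pivot-then-combine idea):

```python
from collections import defaultdict

def pivot_by_id(rows, id_col):
    """Pivot a long sub-table (many rows per ID) into one wide row
    per ID, numbering the repeated columns so no value is lost."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row[id_col]].append(row)
    wide = []
    for key, group in grouped.items():
        out = {id_col: key}
        for i, row in enumerate(group, start=1):
            for col, val in row.items():
                if col != id_col:
                    out[f"{col} {i}"] = val
        wide.append(out)
    return wide

def join_on_id(left, right, id_col):
    """Combine the pivoted sub-table back onto the main table by ID."""
    lookup = {row[id_col]: row for row in right}
    return [{**row, **lookup.get(row[id_col], {})} for row in left]

# Hypothetical sub-table (ID + expanded values) and deduplicated main table
sub = [
    {"id": "t1", "option": "High"},
    {"id": "t1", "option": "Low"},
]
main = [{"id": "t1", "name": "Task one"}]
combined = join_on_id(main, pivot_by_id(sub, "id"), "id")
# → [{'id': 't1', 'name': 'Task one', 'option 1': 'High', 'option 2': 'Low'}]
```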
It looks like this: