I am new here and attempted to use Parabola to help extract data from the GeoPath API and transform it for my web app. I keep running into an auth issue. The GeoPath API requires that you supply the API key value in the header object with the very specific, case-sensitive ‘Geopath-API-Key’ key. For example:
I know the UI for configuring Header Keys is a bit confusing right now, but you can actually type in a custom Header Key; you don’t need to select one from our dropdown.
So, try configuring your Header Key like so. Don’t forget to paste your API key into the Header Value field.
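If you ever call the API outside of Parabola, here is a minimal sketch of the same header setup in Python. The endpoint path is hypothetical, and `http.client` is used because it sends header names exactly as written (some HTTP libraries re-capitalize them, which would break this case-sensitive key):

```python
import http.client
import json

API_KEY = "your-api-key-here"  # paste your real key

HEADERS = {
    "Geopath-API-Key": API_KEY,  # exact, case-sensitive header name
    "Accept": "application/json",
}

def geopath_get(host: str, path: str) -> dict:
    """GET a GeoPath endpoint, sending the required auth header as-is."""
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", path, headers=HEADERS)  # http.client preserves header case
    resp = conn.getresponse()
    return json.loads(resp.read())

# Hypothetical usage (endpoint path invented for illustration):
# data = geopath_get("api.geopath.org", "/inventory/search")
```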
I found Parabola on MakerPad. I tested it by transforming about 120k coordinates into addresses, and I was blown away; it finished in minutes.
I will be using it primarily to transform data from different API sources, including data going into and out of my site.
Looks like I got it to work as a POST call. I will play with it some more to clean the data before exporting. Any tips for cleaning the data before the CSV export and feeding it back into my site via the API?
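For a sense of the typical pre-export cleanup, here is a small sketch using pandas as a stand-in for Parabola's transform steps. The column names and sample values are invented:

```python
import pandas as pd

# Invented sample data with the usual problems: stray whitespace,
# blank/missing values, and an exact duplicate row.
df = pd.DataFrame({
    "frameid": ["100", "101", "101", None],
    "address": [" 1 Main St ", "2 Oak Ave", "2 Oak Ave", ""],
})

df["address"] = df["address"].str.strip()      # trim stray whitespace
df = df.replace("", pd.NA).dropna(how="any")   # drop rows with blank/missing values
df = df.drop_duplicates()                      # remove exact duplicate rows
df.to_csv("clean.csv", index=False)            # export without the pandas index column
```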
@brian thanks for your message. Currently, when I join data from 5 different sources the row count seems accurate; however, the rows are blank in the exported CSV file. Also, duplicates are still present after adding a Dedupe step. Please advise.
Also, when I join tables, all the columns are replicated, which I then remove. Not sure if that causes the blanks after I download the CSV, but I definitely see blanks before, like in the image above.
Can you try joining your 5 sources before applying a single Dedupe step? Can you also send us a screenshot of how your Join step is configured and the results you see after you Join?
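The idea of combining first and deduping once can be sketched like this in pandas (a stand-in for the Parabola steps; the data is invented, and a simple stack-style combine is used for brevity):

```python
import pandas as pd

# Two invented sources that overlap on one row.
src_a = pd.DataFrame({"frameid": [1, 2], "market": ["NY", "LA"]})
src_b = pd.DataFrame({"frameid": [2, 3], "market": ["LA", "CHI"]})

combined = pd.concat([src_a, src_b], ignore_index=True)  # combine all sources first
deduped = combined.drop_duplicates()                     # then one dedupe pass at the end
```

Deduping each source separately would miss the duplicates that only appear once the sources are combined, which is why a single dedupe at the end is the safer ordering.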
Does the CSV you downloaded after you ran your flow match the data you see in the “Results” tab of the CSV Export step you’ve highlighted in your screenshot above?
I moved Dedupe further up in the process. That works fine now. Please see the images of the Join. It still produces blanks and creates duplicate records. Please advise.
Hey Ari - that is because you have the top setting set to “Keep all rows in all tables”, which means that if a match is not found by the join, it will still keep the rows and create blank sections of the table to make it work. You probably want to keep all rows from the primary table, and only those rows from the other tables that match.
If you scroll to the right of those blanks, you will see the data is present but shifted.
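In database terms, “Keep all rows in all tables” behaves like a full outer join, while keeping only the primary table's rows plus matches is a left join. A small pandas sketch (data invented) shows where the blanks come from:

```python
import pandas as pd

primary = pd.DataFrame({"frameid": [1, 2], "size": ["14x48", "30x90"]})
other = pd.DataFrame({"frameid": [2, 3], "impressions": [5000, 7000]})

# "Keep all rows in all tables": unmatched frameid 3 is kept,
# so its "size" cell comes through blank.
outer = primary.merge(other, on="frameid", how="outer")

# Keep primary's rows plus matches: no extra blank rows from "other".
left = primary.merge(other, on="frameid", how="left")
```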
I am not having any luck with the Join, either at the CSV level or with the JSON Flattener. We have done 70k rows and had to do this 70 times; 7 times would be far more efficient. Overall the tool has saved us plenty of time. Downstream we would want to use the API export, and this would require Join, considering we would be at ~500k rows of data. Any help would be appreciated.
Can you provide more information on the data sets you’re trying to Join together?
Do they all share the same column structure?
I see you’re joining the tables based on the frameid field. Are there shared frameid values across these tables? I ask because if not, you could use the Table Merge step instead of the Join step and it’ll just stack these tables all together for you.
Glad to hear that worked! If your column structures are the same across your tables and you don’t expect duplicate rows across your tables, the Table Merge step is a simpler step to use over the Join step!
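The distinction can be sketched in pandas (a stand-in for the Parabola steps; data invented): Table Merge stacks tables with the same columns like `concat`, while Join matches rows on a key like `merge`. With no shared `frameid` values, a join returns nothing useful, which is exactly when stacking is the right tool:

```python
import pandas as pd

# Two invented tables with the same columns but no shared frameid values.
t1 = pd.DataFrame({"frameid": [1, 2], "market": ["NY", "LA"]})
t2 = pd.DataFrame({"frameid": [3, 4], "market": ["CHI", "MIA"]})

stacked = pd.concat([t1, t2], ignore_index=True)  # Table Merge: rows stacked, 4 rows
joined = t1.merge(t2, on="frameid", how="inner")  # Join: no matches, 0 rows
```

The `merge` result also duplicates the non-key columns (with `_x`/`_y` suffixes), which mirrors the replicated columns mentioned earlier in the thread.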
Just in case it’s helpful in the future, here are our educational videos on the Join and Table Merge steps: