Iterable to Panoply

This page provides you with instructions on how to extract data from Iterable and load it into Panoply. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is Iterable?

Iterable hosts a growth marketing platform that provides omnichannel customer engagement through email, SMS, web push, and other channels. Marketers can use a drag-and-drop interface to set up campaign workflows.

What is Panoply?

Panoply can spin up a new Amazon Redshift instance in just a few clicks. Panoply's managed data warehouse service uses machine learning and natural language processing (NLP) to learn, model, and automate data management activities from source to analysis. It can import data with no schema, no modeling, and no configuration, and lets you use analysis, SQL, and visualization tools just as you would if you were creating a Redshift data warehouse on your own.

Getting data out of Iterable

Iterable exposes data through webhooks, which you can create at Integrations > Webhooks. You must specify the URL to which the webhook should POST data and choose an authorization type. Then edit the webhook, tick the Enabled box, select the events you want the webhook to send data for, and save your changes.
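
Once the webhook is enabled, Iterable POSTs a JSON payload to your URL for each selected event. As a minimal sketch of the receiving end, here's a Flask handler that appends each event to a local file; the endpoint path, port, and file name are placeholders for illustration, not anything Iterable prescribes.

import json

from flask import Flask, request

app = Flask(__name__)

# Hypothetical endpoint path -- use whatever URL you registered
# under Integrations > Webhooks.
@app.route("/iterable-webhook", methods=["POST"])
def iterable_webhook():
    event = request.get_json(force=True)
    # Buffer events locally (or push them to a queue) for later loading.
    with open("iterable_events.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)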

Sample Iterable data

Iterable returns data in JSON format. Here’s an example of the data returned for an email unsubscribe event:
{
   "email": "sheldon@iterable.com",
   "eventName": "emailUnSubscribe",
   "dataFields": {
      "unsubSource": "EmailLink",
      "email": "sheldon@iterable.com",
      "createdAt": "2017-12-02 22:13:05 +00:00",
      "campaignId": 59667,
      "templateId": 93849,
      "messageId": "d3c44d47b4994306b4db8d16a94db025",
      "emailSubject": "Welcome to JM Photography at {{now}}",
      "campaignName": "Test the NOW handlebars",
      "workflowId": null,
      "workflowName": null,
      "templateName": "Sample photography welcome",
      "channelId": 3420,
      "messageTypeId": 3866,
      "experimentId": null,
      "emailId": "c59667:t93849:sheldon@iterable.com"
   }
}

Preparing Iterable data

If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each value in the response, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive them. Iterable's documentation should tell you what fields are provided by each endpoint, along with their corresponding datatypes.

Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you'll likely have to create additional tables to capture the unpredictable cardinality in each record.
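
As a rough sketch of that kind of flattening, here's one way to split a record into a parent row plus child rows in Python. The child-row shape and the choice of messageId as the join key are illustrative assumptions, not Iterable's schema.

# Rough sketch: separate a webhook event into a "flat" parent row
# and child rows for any list-valued fields.
def flatten_event(event):
    fields = event.get("dataFields", {})
    parent = {k: v for k, v in fields.items() if not isinstance(v, list)}
    parent["eventName"] = event.get("eventName")

    children = []
    for key, values in fields.items():
        if isinstance(values, list):
            for item in values:
                children.append({
                    "messageId": fields.get("messageId"),  # link back to parent
                    "field": key,
                    "value": item,
                })
    return parent, children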

Loading data into Panoply

When you've identified all the columns you want to insert, use the Redshift CREATE TABLE statement to make a table in your data warehouse to receive the data.
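
For the unsubscribe event shown above, the DDL might look something like the following sketch, issued here through psycopg2 (which speaks Redshift's PostgreSQL-compatible protocol). The table name, column types, and connection details are assumptions based on the sample payload; check the types against Iterable's documentation.

import psycopg2

# Table name and column types are guesses from the sample payload above.
DDL = """
CREATE TABLE IF NOT EXISTS email_unsubscribe_events (
    email           VARCHAR(256),
    event_name      VARCHAR(64),
    unsub_source    VARCHAR(64),
    created_at      TIMESTAMP,
    campaign_id     INTEGER,
    template_id     INTEGER,
    message_id      VARCHAR(64),
    email_subject   VARCHAR(512),
    campaign_name   VARCHAR(256),
    workflow_id     INTEGER,
    workflow_name   VARCHAR(256),
    template_name   VARCHAR(256),
    channel_id      INTEGER,
    message_type_id INTEGER,
    experiment_id   INTEGER,
    email_id        VARCHAR(512)
);
"""

# Placeholder connection details -- substitute your cluster's endpoint
# and credentials.
conn = psycopg2.connect(host="your-cluster.redshift.amazonaws.com",
                        port=5439, dbname="dev",
                        user="admin", password="...")
with conn, conn.cursor() as cur:
    cur.execute(DDL)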

Now you can replicate your data. It may seem as if the easiest way to do that (especially if there isn't much of it) is to build INSERT statements and add data to your table row by row. If you have any experience with SQL, this probably will be your first inclination. But beware! Redshift isn't optimized for inserting data one row at a time. If you have a high volume of data to be inserted, you should instead load the data into Amazon S3 and then use the Redshift COPY command to import it into Redshift.
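
Here's a sketch of that pattern with boto3 and psycopg2, assuming you've buffered flattened events into a newline-delimited JSON file (like the one written by the receiver sketch above) whose keys match the table's column names. The bucket, file, IAM role, and connection details are all placeholders.

import boto3
import psycopg2

# Upload the buffered events to S3 (bucket and key are placeholders).
s3 = boto3.client("s3")
s3.upload_file("iterable_events.jsonl", "my-etl-bucket",
               "iterable/events.jsonl")

# COPY is far faster than row-by-row INSERTs for bulk loads.
# JSON 'auto' matches each record's keys to column names, so this
# assumes the file's keys were renamed to match the table.
COPY_SQL = """
COPY email_unsubscribe_events
FROM 's3://my-etl-bucket/iterable/events.jsonl'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT AS JSON 'auto'
TIMEFORMAT 'auto';
"""

conn = psycopg2.connect(host="your-cluster.redshift.amazonaws.com",
                        port=5439, dbname="dev",
                        user="admin", password="...")
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)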

Keeping Iterable data up to date

Once you've set up the webhooks you want and have begun collecting data, you can relax – as long as everything continues to work correctly. You'll have to keep an eye out for any changes to Iterable's webhooks implementation.

Other data warehouse options

Panoply is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, Snowflake, or Microsoft Azure Synapse Analytics, which are RDBMSes that use similar SQL syntax. Others choose a data lake, like Amazon S3 or Delta Lake on Databricks. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Azure SQL Data Warehouse, To S3, and To Delta Lake.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to move data from Iterable to Panoply automatically. With just a few clicks, Stitch starts extracting your Iterable data, structuring it in a way that's optimized for analysis, and inserting that data into your Panoply data warehouse.