Integrating your internal systems

Transcend offers four methods to help you integrate your internal data systems: server webhooks, cron jobs, Automated Vendor Coordination (AVC), and direct database integrations. Depending on your existing infrastructure and the specific systems in question, you might choose different approaches; this page will help you decide the optimal approach for each system.

Server webhooks

To implement a server webhook, your team will need to spin up a lightweight server that Transcend will notify each time a new DSR comes in. After being notified, the server will (asynchronously) run a custom script that queries the user's data (in the case of an access request) or performs some data modification (in the case of erasure and opt-outs). Finally, the server will notify Transcend of the respective job's completion (passing along the user's data in the case of access).

Creating a server webhook is a straightforward process, and we have example code to help you along the way. Moreover, once you've implemented one webhook, additional webhooks will look much the same, differing primarily in the logic of the custom script.
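
To help you visualize the shape of such a server, here is a minimal sketch written with Flask. The endpoint path, payload fields, and completion URL are placeholders invented for this sketch rather than Transcend's actual API, so treat it as an outline and consult our API documentation for the real interface.

```python
# Minimal sketch of a DSR webhook server using Flask. The payload fields,
# endpoint path, and completion URL are illustrative placeholders,
# not Transcend's actual API.
import threading

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical completion endpoint and API key for this sketch.
TRANSCEND_COMPLETE_URL = "https://api.transcend.example/dsr/complete"
API_KEY = "your-api-key"


def lookup_user_data(email):
    """Query your own datastore for the user's records (stubbed here)."""
    return {"email": email, "orders": []}  # replace with a real query


def erase_user_data(email):
    """Delete or anonymize the user's records (stubbed here)."""
    pass  # replace with your deletion/anonymization routine


def process_dsr(payload):
    """Run the custom script for one DSR, then report completion."""
    body = {"request_id": payload["request_id"]}
    if payload["type"] == "ACCESS":
        body["data"] = lookup_user_data(payload["subject_email"])
    else:  # erasure / opt-out
        erase_user_data(payload["subject_email"])
    requests.post(
        TRANSCEND_COMPLETE_URL,
        json=body,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )


@app.route("/transcend-webhook", methods=["POST"])
def handle_webhook():
    # Acknowledge immediately; run the custom script asynchronously.
    threading.Thread(target=process_dsr, args=(request.get_json(),)).start()
    return jsonify({"received": True}), 200
```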

Pros

  • All access to the database is controlled by your engineering team (there is no sharing of credentials, and we treat your system as a black box)
  • Integrates seamlessly into any existing scripts you might have
  • Highly flexible in terms of the logic you want to implement. You could configure a single access webhook that queries from five different databases in five different ways, or you could configure multiple webhooks for one database to reflect your existing partitioning of user personal data.

Cons

  • Requires some engineering lift to spin up the server and write the custom script (1-3 hrs)
  • You'll need to update the script logic if your underlying data or schemas change
  • Isn't ideal for batching, as Transcend notifies your server once per inbound DSR (of course, you can batch on your end, but that's more custom logic to consider)

Cron jobs

To implement a cron job, your team will need to write a script that interacts with our DSR API. Each time the script runs, it will retrieve all pending requests, run the corresponding internal workflow(s) to process those requests (similar to the script for webhook processing), then notify our API of each job's completion (and, in the case of access, upload the relevant user data). The script can be run as needed or on whatever schedule best suits your needs (say, once a day at midnight).

A Python example of what such a script would look like can be found here. Importantly, with a cron job integration, you will not need to spin up a server to receive notifications from Transcend, since the script will be pulling the pending requests instead.
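
As a rough sketch under the same caveats, such a script might look like the following. The endpoint paths and field names below are placeholders invented for illustration, not our actual DSR API; the Python example linked above shows the real interface.

```python
# Rough sketch of a cron-driven DSR script. Endpoint paths and field
# names are placeholders for this sketch, not our actual DSR API.
import requests

API_BASE = "https://api.transcend.example"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer your-api-key"}


def lookup_user_data(email):
    """Query your datastore for the user's records (stubbed here)."""
    return {"email": email}


def erase_user_data(email):
    """Delete or anonymize the user's records (stubbed here)."""
    pass


def fetch_pending_requests():
    """Pull every DSR currently waiting on this system."""
    resp = requests.get(f"{API_BASE}/dsr/pending", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["requests"]


def process_request(dsr):
    """Run the matching internal workflow, then report completion."""
    body = {"request_id": dsr["id"]}
    if dsr["type"] == "ACCESS":
        body["data"] = lookup_user_data(dsr["subject_email"])
    else:
        erase_user_data(dsr["subject_email"])
    requests.post(f"{API_BASE}/dsr/complete", json=body,
                  headers=HEADERS, timeout=30)


if __name__ == "__main__":
    # Schedule via cron, e.g.: 0 0 * * * /usr/bin/python3 process_dsrs.py
    for dsr in fetch_pending_requests():
        process_request(dsr)
```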

Pros

  • All access to the database is controlled by your engineering team (there is no sharing of credentials, and we treat your system as a black box)
  • Integrates seamlessly into any existing scripts you might have
  • Does not require any engineering lift to spin up a server for a webhook
  • Rather than processing requests as they come in (the simplest way to manage a server webhook), you can batch-process DSRs when it's most convenient for your databases

Cons

  • Requires some engineering lift to write the custom script (1-3 hrs)
  • You'll need to update the script logic if your underlying data or schemas change
  • DSRs will process on a slower cadence than they would with a first-come-first-served webhook approach (this has no bearing on privacy SLAs, but your preference might be to resolve DSRs as rapidly as possible)

Automated Vendor Coordination

In some cases, you might have internal user data that can only be accessed or modified via some manual process. One example would be an internal dashboard whose data isn't readily exportable: in that case, someone would need to extract or modify the data manually. Automated Vendor Coordination (AVC) optimizes for manual workflows like this.

While most often used to coordinate with external vendors, AVC can likewise coordinate with internal employees throughout your org to efficiently manage manual processes. When a new DSR comes in, an AVC integration automatically notifies (via email or in-app) the person (or persons) tasked with resolving manual workflows. After performing the workflow, that person marks the AVC integration as complete (uploading any user data in the case of access). To expedite this process further, we built a Bulk Requests page that allows internal employees to view and resolve as many pending DSRs as they'd like, all in one place. Thus, AVC enables bulk processing of manual workflows, minimizing repetitive manual work.

Pros

  • Ideal approach for integrating any user data derived from a manual flow
  • The Bulk Requests page means manual processes can be completed for many DSRs at once
  • Very simple setup with no engineering lift

Cons

  • Involves a manual process that we ideally want to automate away
  • Inherently slower than fully automated approaches (though still within standard privacy SLAs)

Database integrations

We recently launched a database integration that connects directly to your internal systems. This approach eliminates the engineering lift required to spin up a server webhook or configure a cron job with custom logic that queries/modifies user PII. In essence, this integration cuts out the "middle man" of a webhook or a cron job; instead, your internal systems integrate seamlessly with the Admin Dashboard itself. You'll be able to write and test SQL queries directly in the Admin Dashboard that interact with your database as if you were doing so internally.
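
For a sense of what this looks like, the queries you register might resemble the sketch below. The table names, column names, and the {{email}} placeholder are all invented for illustration; they aren't a prescribed schema or the Dashboard's actual parameter syntax.

```sql
-- Illustrative only: the schema and the {{email}} placeholder are
-- assumptions for this sketch, not a prescribed layout or syntax.

-- Access request: gather the user's profile and order history.
SELECT u.email, u.full_name, o.order_id, o.total
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.email = {{email}};

-- Erasure request: anonymize the profile in place rather than
-- deleting rows and breaking foreign-key references.
UPDATE users
SET full_name = NULL, phone = NULL
WHERE email = {{email}};
```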

This sounds ideal, but there are a few caveats. For one, our database integration currently only supports SQL databases, so your non-SQL databases will likely require a webhook or cron job for now (that said, supporting non-SQL databases is on our roadmap!). Moreover, this feature is currently in MVP status, so it may require some help from our team to get your direct database connection to an ideal state.

Because of the sensitive nature of exposing an internal database to an external system like Transcend, implementing this integration requires that you self-host our encryption gateway (Sombra). This gateway prevents Transcend from operating on your database directly (e.g., from running arbitrary SQL queries). Transcend is committed to a fully trustless architecture, so we've carefully designed this solution to ensure that your data stays completely out of our hands.

Looking toward the near future, these database integrations will do much more than just connect to your systems directly. Upcoming features include helping you discover where your user personal data is located in the first place, generating relevant SQL queries for you, detecting schema changes and automatically rewriting those queries, and providing meaningful visualizations of your systems themselves (mapping out where your user data lives). These improvements are actively in development, so stay tuned.

Pros

  • Eliminates the engineering lift required to spin up server webhooks or implement cron jobs
  • Eliminates the engineering lift required to maintain those servers and keep custom scripts up to date
  • Enables you to test SQL directly against your databases from within our platform, reducing the iteration required to verify that your workflows/queries are doing what they should

Cons

  • MVP status means it may take some back and forth with our team to get your direct database connection working smoothly; moreover, the more advanced discovery/data-mapping features are not available just yet
  • Requires a self-hosted encryption gateway (Sombra) to ensure proper security measures (the alternative would be taking on substantial liability regarding your internal data systems, which we do not recommend)
  • Not an efficient strategy if you already have some internal scripting in place that we could hook up via a webhook or cron job
  • Not ideal for batching in its current state, though that's another feature on our roadmap