Integrating your internal systems
Transcend offers four methods to help you integrate your internal data systems: server webhooks, cron jobs, automated vendor coordination (AVC), and direct database integrations. Depending on your existing infrastructure and the specific systems in question, you might choose different approaches; this page will help you decide the optimal approach for each system.
Server webhook

To implement a server webhook, your team will need to spin up a lightweight server that Transcend will notify each time a new DSR comes in. After being notified, the server will (asynchronously) run a custom script that queries the user's data (in the case of an access request) or performs some data modification (in the case of erasure and opt-outs). Finally, the server will notify Transcend of the respective job's completion (passing along the user's data in the case of access).
Creating a server webhook is a straightforward process, and we have example code to help you along the way. Moreover, once you've implemented one webhook, additional webhooks will look much the same, differing primarily in the logic of the custom script.
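To give a feel for the shape of this server, here is a minimal sketch in Python using Flask. The Transcend notification URL, payload fields, and auth header below are illustrative placeholders rather than the actual API; refer to our example code for the real request shapes.

```python
# Minimal webhook server sketch (Flask + requests). The Transcend URL,
# payload fields, and auth header are illustrative placeholders.
import threading

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

TRANSCEND_NOTIFY_URL = "https://api.transcend.io/..."  # placeholder
API_KEY = "<api-key>"  # load from a secrets manager in practice


def lookup_user_data(user_id):
    """Replace with your own query logic (SQL, internal API, etc.)."""
    return {"example": "user data"}


def erase_user_data(user_id):
    """Replace with your own erasure/opt-out logic."""


def process_dsr(dsr):
    """Run the custom script, then notify Transcend of completion."""
    if dsr["type"] == "ACCESS":
        payload = {"requestId": dsr["requestId"],
                   "data": lookup_user_data(dsr["userId"])}
    else:  # ERASURE, OPT_OUT, ...
        erase_user_data(dsr["userId"])
        payload = {"requestId": dsr["requestId"]}
    requests.post(TRANSCEND_NOTIFY_URL, json=payload,
                  headers={"Authorization": f"Bearer {API_KEY}"})


@app.route("/transcend-webhook", methods=["POST"])
def handle_webhook():
    dsr = request.get_json()
    # Acknowledge immediately; do the real work asynchronously.
    threading.Thread(target=process_dsr, args=(dsr,)).start()
    return jsonify({"received": True}), 200
```

Acknowledging the notification right away and running the custom script in the background keeps the webhook responsive even when the script itself is slow.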
Pros
- All access to the database is controlled by your engineering team (there is no sharing of credentials, and we treat your system as a black box)
- Integrates seamlessly into any existing scripts you might have
- Highly flexible in terms of the logic you want to implement. You could configure a single access webhook that queries from five different databases in five different ways, or you could configure multiple webhooks for one database to reflect your existing partitioning of user personal data.
Cons
- Requires some engineering lift to spin up the server and write the custom script (1-3 hrs)
- Requires updating the script logic if your underlying data or schemas change
- Not ideal for batching, since Transcend notifies your server once per inbound DSR (you can batch on your end, but that means more custom logic to maintain)
Cron job

To implement a cron job, your team will need to write a script that interacts with our DSR API. Each time the script runs, it will retrieve all pending requests, run the corresponding internal workflow(s) to process those requests (similar to the script for webhook processing), then notify our API of each job's completion (and, in the case of access, upload the relevant user data). This script can be run as needed or on whatever schedule best suits your needs (say, once a day at midnight).
A Python example of what such a script would look like can be found here. Importantly, with a cron job integration, you will not need to spin up a server to receive notifications from Transcend, since the script will be pulling the pending requests instead.
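As a rough sketch of the core loop, the script might look like the following. The endpoint paths, payload shapes, and headers here are placeholders standing in for the actual DSR API, which the linked Python example demonstrates.

```python
# Cron script sketch (requests). Endpoint paths, payload shapes, and
# headers are illustrative placeholders for the real DSR API.
import requests

API_BASE = "https://api.transcend.io"  # placeholder
HEADERS = {"Authorization": "Bearer <api-key>"}  # load securely in practice


def fetch_pending_requests():
    """Pull all DSRs that are currently awaiting processing."""
    resp = requests.get(f"{API_BASE}/dsrs?status=pending", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["requests"]


def process_request(dsr):
    """Run your internal workflow, mirroring the webhook script's logic."""
    if dsr["type"] == "ACCESS":
        return {"requestId": dsr["requestId"],
                "data": {"example": "user data"}}  # your lookup here
    # your erasure/opt-out logic here
    return {"requestId": dsr["requestId"]}


def main():
    for dsr in fetch_pending_requests():
        body = process_request(dsr)
        requests.post(f"{API_BASE}/dsrs/complete", json=body, headers=HEADERS)


if __name__ == "__main__":
    main()  # run ad hoc, or on a schedule, e.g. `0 0 * * * python dsr_cron.py`
```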
Pros
- All access to the database is controlled by your engineering team (there is no sharing of credentials, and we treat your system as a black box)
- Integrates seamlessly into any existing scripts you might have
- Does not require any engineering lift to spin up a server for a webhook
- Rather than processing requests as they come in (the simplest way to manage the server webhook), you can batch-process DSRs when it's most convenient for your databases
Cons
- Requires some engineering lift to write the custom script (1-3 hrs)
- Requires updating the script logic if your underlying data or schemas change
- DSRs will process on a slower cadence than with a first-come-first-served webhook approach (still well within privacy SLAs, but you may prefer to resolve DSRs as rapidly as possible)
Automated vendor coordination

In some cases, you might have internal user data that can only be accessed or modified via some manual process. One example would be an internal dashboard whose data isn't readily exportable; in that case, someone would need to extract or modify the data manually. Automated vendor coordination (AVC) optimizes for manual workflows like this.
While most often used to coordinate with external vendors, AVC can likewise coordinate with internal employees throughout your org to efficiently manage manual processes. When a new DSR comes in, an AVC integration automatically notifies (via email or in-app) the person (or persons) tasked with resolving manual workflows. After performing the workflow, that person marks the AVC integration as complete (uploading any user data in the case of access). To expedite this process further, we built a Bulk Requests page that allows internal employees to view and resolve as many pending DSRs as they'd like, all in one place. AVC thus enables bulk processing of manual workflows, minimizing repeated manual work.
Pros
- Ideal approach for integrating any user data derived from a manual flow
- The Bulk Requests page means manual processes can be performed for many DSRs at once
- Very simple setup with no engineering lift
Cons
- Involves a manual process that we ideally want to automate away
- Inherently slower than fully automated approaches (though still within standard privacy SLAs)
Database integration

Transcend can integrate directly with your database. This approach eliminates the engineering lift required to spin up a server webhook or configure a cron job with custom logic that queries or modifies user PII. In essence, this integration cuts out the "middleman" of a webhook or a cron job; instead, your internal systems integrate seamlessly into the Admin Dashboard itself. You'll be able to write and test SQL queries directly in the Admin Dashboard that interact with your database as if you were querying it internally.
Because of the sensitive nature of exposing an internal database to an external system like Transcend, we recommend self-hosting our encryption gateway, Sombra. This gateway prevents Transcend from operating on your database directly (e.g., from running arbitrary SQL queries). Transcend is committed to a fully trustless architecture, so we've carefully designed this solution to ensure that your data stays completely out of our hands.
Beyond requiring no code, this approach works with Transcend's Content Classification product, which can discover and classify personal data in your databases. Using these classifications, you can generate deterministic SQL statements that look up or delete users based on personally identifiable information such as an email, phone number, or user ID. By leveraging Transcend's Data Mapping and Privacy Request products together, you can keep your Access & Erasure queries up to date as your database schemas change, all without maintaining any code.
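To make that concrete, the generated statements might resemble the hypothetical queries below; the actual table and column names depend entirely on your schema and the classifications Transcend discovers.

```sql
-- Hypothetical generated queries; table and column names depend on
-- your schema and the classifications discovered by Transcend.

-- Access: look up a data subject's records by email
SELECT full_name, email, shipping_address
FROM customers
WHERE email = :subject_email;

-- Erasure: delete those same records
DELETE FROM customers
WHERE email = :subject_email;
```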
Pros
- Eliminates the engineering lift required to spin up server webhooks or implement cron jobs
- Eliminates the engineering lift required to maintain those servers and keep custom scripts up to date
- Enables you to test SQL directly against your databases from within our platform, reducing the iterations needed to validate that workflow
- Enables you to leverage our Content Classification and SQL generation products to keep your privacy request workflows current as database schemas change
- Scalable method for handling privacy request workflows when integrating dozens or hundreds of databases
Cons
- Requires a self-hosted security gateway for the most secure configuration
- Limits you to SQL-only operations, rather than the full flexibility of custom code
- If you already have some internal scripting in place, setting up the database integration may be slower than directly integrating the existing script