Cron Job Integration

When a new data subject request is made, you can set up a "Cron Job" integration that blocks the request from completing until your script processes it. Your script can list the requests that currently need to be processed, run the job to erase, opt out, or find data related to that user, and then POST back to the Transcend API to notify that the job has been completed for that request.

This integration is similar to the Server Webhook integration, except that your script uses our API to list active requests instead of receiving a webhook notification when new requests are made. With the "Cron Job" integration, your team can run the script manually when it is most convenient, or put it on a schedule to batch jobs at a time when the database is not actively being used.

We have examples on GitHub: a full working Python example and a Ruby example.

  1. Go to your Integrations to connect the "Cron Job" integration type.
  2. Give your integration a title (e.g. "Data Warehouse Script").
  3. Click Connect to create the integration.
  4. Navigate to this integration in your Integrations and select the "Manage Datapoints" tab to configure what types of requests your server should be notified about.
  5. Create a new API key for this integration and copy the key. This will be used to authenticate your requests to the Transcend API.

Your script can list the active data subject requests that need to be processed.

See the API reference.

GET /v1/data-silo/<data-silo-id>/pending-requests/<action-type>

You can use offset-based pagination if needed. The default page size is 50.

GET /v1/data-silo/<data-silo-id>/pending-requests/<action-type>?limit=20&offset=50
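As a hedged sketch, the listing call with pagination might look like the following in Python. The base URL, data silo ID, action type, API key, and Bearer authorization scheme are placeholders and assumptions, not values taken from this page; substitute your own from the API reference.

```python
# Hedged sketch: list pending requests page by page using offset pagination.
# API_BASE, DATA_SILO_ID, ACTION_TYPE, and API_KEY are placeholders.
import json
import urllib.request

API_BASE = "https://api.transcend.io"   # placeholder base URL
DATA_SILO_ID = "<data-silo-id>"         # your integration's data silo ID
ACTION_TYPE = "<action-type>"           # e.g. the request type to process
API_KEY = "<api-key>"                   # key created for this integration

def pending_requests_url(limit=50, offset=0):
    """Build the pending-requests URL for one page (default page size is 50)."""
    return (f"{API_BASE}/v1/data-silo/{DATA_SILO_ID}"
            f"/pending-requests/{ACTION_TYPE}?limit={limit}&offset={offset}")

def iter_pending_requests(limit=50):
    """Yield every pending request, paging until a short page is returned."""
    offset = 0
    while True:
        req = urllib.request.Request(
            pending_requests_url(limit, offset),
            headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
        )
        with urllib.request.urlopen(req) as resp:
            items = json.load(resp)["items"]
        yield from items
        if len(items) < limit:  # short page means no more results
            break
        offset += limit

if __name__ == "__main__":
    for item in iter_pending_requests():
        print(item["type"], item["identifier"])
```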

This route returns a JSON response:

{
  "items": [
    {
      "identifier": "test@transcend.io",
      "type": "email",
      "nonce": "eyJhbGciOiJIUzM4NCIsInR5cCI6IkpXVCJ9.eyJlbmNyeXB0ZWRDRUtDb250ZXh0I ..."
    }
  ]
}

Here identifier is the lookup key, type is the unique name of the identifier defined in your settings, and nonce is an ID you will need to pass back when you upload data.

Using items[*].type and items[*].identifier, implement the request type on your database or internal system. This part of the process is unique to your business. It may involve:

  • returning or removing rows from a database
  • returning or removing files from a filesystem
  • replacing fields containing personal data with anonymized placeholders
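The erasure case above can be sketched as follows. This is purely illustrative: the users table, its columns, and the identifier-to-column mapping are assumptions about a hypothetical schema, not anything prescribed by Transcend.

```python
# Hedged sketch of the erasure branch: delete rows matching the request
# identifier from a hypothetical "users" table. Your schema will differ.
import sqlite3

def erase_user(conn, identifier, id_type):
    """Delete rows whose lookup column matches the request identifier."""
    # Map the identifier type to a column name (assumed schema).
    column = {"email": "email", "userId": "id"}.get(id_type)
    if column is None:
        raise ValueError(f"no column mapped for identifier type {id_type!r}")
    cur = conn.execute(f"DELETE FROM users WHERE {column} = ?", (identifier,))
    conn.commit()
    return cur.rowcount  # number of rows erased

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('u1', 'test@transcend.io')")
    print(erase_user(conn, "test@transcend.io", "email"))  # prints 1
```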

Please consult with your Transcend account representative on recommendations or guidelines for this process.

Once your server has successfully processed the request, you must send a POST request to Transcend indicating that processing is complete. In the case of a Data Subject Access Request, this also includes uploading any data associated with the end user.

Please refer to this guide for information on responding to DSRs.
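As a rough sketch only: the completion notification might look like the code below. The endpoint URL, payload shape, and authorization scheme here are placeholders, not the real Transcend API; consult the guide above for the actual endpoint and fields. The nonce comes from the pending-requests listing.

```python
# Hedged sketch of notifying Transcend that a request has been processed.
# NOTIFY_URL and the payload fields are placeholders; see the DSR response
# guide for the real endpoint and body.
import json
import urllib.request

API_KEY = "<api-key>"                          # key created for this integration
NOTIFY_URL = "<completion-endpoint-from-the-guide>"  # placeholder, not a real path

def completion_payload(nonce, status="COMPLETED"):
    """Build an assumed JSON body; the nonce identifies the request."""
    return json.dumps({"nonce": nonce, "status": status}).encode()

def notify_completed(nonce):
    """POST the completion notification and return the HTTP status code."""
    req = urllib.request.Request(
        NOTIFY_URL,
        data=completion_payload(nonce),
        headers={
            "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```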

Depending on the type of request and the region where the data subject resides, the law requires a response within a specified period of time, normally ranging from 2 weeks to 2 months. With this in mind, running your script at least once per week should normally be sufficient for any type of request.

We recommend running the script once per day, during the hours when the database is under the least load.
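If you opt for a schedule, a crontab entry can run the script for you. A minimal sketch, assuming a nightly 3 AM run and placeholder paths for the interpreter, script, and log file:

```shell
# Run the processing script every day at 03:00, when database load is low.
# All paths below are placeholders for your own setup.
0 3 * * * /usr/bin/python3 /opt/transcend/process_requests.py >> /var/log/transcend-cron.log 2>&1
```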