Encrypt in Transit
We use TLS encryption for all internal and external communication between our services and third-party services. All application-layer (layer 7) communications are HTTPS based, and transport-layer communications are secured with SSL/TLS.
Encrypt at Rest
All collected user data and monitoring data is stored encrypted with encryption keys managed by AWS KMS. All snapshots and backups are likewise encrypted where they reside.
Thundra agents run in the user application, collect monitoring data (traces, metrics, logs) from both the running application itself and the underlying Lambda container, and send it to the Thundra Collector API for ingestion. Collected monitoring data is sent securely over HTTPS (TLS). Requests are authenticated with the API keys provided by the Thundra Console, which are sent in the request headers to sign the request. After processing, the received data is stored encrypted at rest with AWS KMS.

By default, all integrations (AWS SQS, AWS SNS, AWS Lambda, …, MySQL, PostgreSQL, HTTP, Redis, etc.) are enabled and capture outgoing requests (messages, queries, request bodies, commands, etc.). If a request contains sensitive data, or you simply don't want request data to be captured, you can enable masking through configuration so it won't be traced. Additionally, the Thundra agent can trace your code base down to method arguments, return values, and local variables when line-by-line tracing is enabled. These low-level details are disabled by default and are collected only when you enable them. Even when they are enabled, we provide a programmatic API so you can fully or partially mask sensitive data yourself.
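To illustrate the kind of redaction that masking makes possible, here is a minimal sketch of scrubbing sensitive fields from a captured request body before it is traced. The field names and the `mask_span` helper are illustrative assumptions, not Thundra's actual agent API:

```python
# Hypothetical masking sketch: redact sensitive keys from a captured
# payload before it is attached to a trace span.
SENSITIVE_KEYS = {"password", "credit_card", "ssn", "authorization"}

def mask_span(payload: dict) -> dict:
    """Return a copy of the captured payload with sensitive values redacted."""
    masked = {}
    for key, value in payload.items():
        if key.lower() in SENSITIVE_KEYS:
            masked[key] = "*****"
        elif isinstance(value, dict):
            # Recurse so nested structures are masked as well.
            masked[key] = mask_span(value)
        else:
            masked[key] = value
    return masked
```

A real agent would apply a hook like this to every captured message, query, or request body when masking is enabled for that integration.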
All communication between the user's browser and the Thundra Console is done securely over HTTPS (TLS). We use JWT tokens with Auth0 for console authentication. For payments, we use Stripe, which is certified to PCI Service Provider Level 1, the most stringent level of certification available in the payments industry. We therefore do not collect or store any of your credit card information; it is handled and managed directly by Stripe.
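JWT-based authentication works by verifying a signature computed over the token's header and payload. The following is a minimal HS256-style sketch in Python stdlib; the helper names are illustrative, and real console authentication is handled by Auth0 and its libraries:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url encoding without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build an HS256-signed token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare it in constant time."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Tampering with either the payload or the signature makes verification fail, which is what lets the console trust claims inside a token it did not just issue.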
All data stores (as well as the internal and external services) sit behind a VPC and are not accessible from outside the private network. At Thundra, access to data stores is restricted to admins and the operations team. Two-factor authentication is required for employees to access Thundra's internal services, and actions are audited via AWS CloudTrail logs.
Data retention depends on the user's pricing plan:
If you want to delete your account, you can contact us through Slack or firstname.lastname@example.org. We will respond with a confirmation of the deletion within 24 hours.
All services and data stores in Thundra are designed as highly available components. We use Aurora MySQL, DynamoDB, and Elasticsearch to store collected data, and collected monitoring data is also backed up to AWS S3. AWS DynamoDB and S3 are highly available and resilient services, running across multiple AZs with backups. For Elasticsearch, we run multiple instances across multiple AZs, and each shard has a replica located in another AZ. For Aurora MySQL, we have multiple read replicas in different AZs and regions; in case of an outage, one of them can be promoted to the master role.
In addition to the data stores, both our collector and console applications run as multiple instances across multiple AZs behind Application Load Balancers, and they automatically scale up and down with system load. Apart from the collector and console applications, all remaining components of our backend are 100% serverless and are, by nature, highly available and scalable.
All of our data stores, RDS and Elasticsearch (and even the caches, ElastiCache / Redis), have daily backups, so in case of a disaster they can be restored to the most recent daily snapshot. Changes that occurred after the snapshot and before the disaster can be restored by replaying events from the S3 backups. In addition to the S3 backups, the data retention of our Kinesis stream, which carries the collected monitoring data, is 7 days, so in case of a catastrophic Elasticsearch failure we can replay the data for re-ingestion.
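The recovery idea above, that a retained stream lets a lost downstream store be rebuilt by re-delivering its records, can be sketched with an in-memory stand-in. This is illustrative only; real replay would use the Kinesis consumer API, not this toy class:

```python
from collections import deque

class RetainedStream:
    """Toy stand-in for a Kinesis-like stream with record retention.

    Because records stay in the stream for the retention window, a
    downstream store (e.g. a search index) that fails can be rebuilt
    by replaying the retained records into a fresh sink.
    """

    def __init__(self):
        self._records = deque()

    def put(self, record):
        """Ingest a record; it is retained even after delivery."""
        self._records.append(record)

    def replay(self, sink):
        """Re-deliver every retained record into the given sink."""
        for record in self._records:
            sink.append(record)
        return sink

stream = RetainedStream()
for event in ["trace-1", "metric-2", "log-3"]:
    stream.put(event)

# Simulate losing the index, then rebuild it by replaying the stream.
rebuilt_index = stream.replay([])
```

The trade-off is the retention window: only records still inside it (7 days in our case) can be replayed, which is why the S3 backups cover everything older.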