Deployment on AWS
This page describes the process and requirements for deploying a new Qrvey MultiPlatform Environment in AWS.
For all V9 installations, please first contact Customer Support to get your docker registry credentials.
Requirements
- Docker: The latest version of Docker should be installed.
- The Docker Image for the desired version, found in the release notes.
- The registry username and password provided by the Qrvey Support team.
- IAM user with Admin access, an access key, and a secret key: This is needed to create the resources for deployment.
- The VPC (or equivalent) used to deploy the Qrvey Platform should have a minimum CIDR of /22.
- An S3 bucket to store the state file. It should be in the same region as the deployment.
- SMTP configuration to send emails.
- A DNS Hosted Zone (Optional): Used to generate valid SSL certificates for the Qrvey Composer domain. If no domain is set up, one is generated in the format $deployment_id.mp.qrveyapp.com. To automatically set up a custom DNS, the Route 53 zone must be in the same account as the deployment, and the credentials must have sufficient permissions.
- If using an IAM user for deployment, these are the minimum required permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "elasticloadbalancing:*",
        "autoscaling:*",
        "eks:*",
        "iam:*",
        "route53:*",
        "s3:*",
        "secretsmanager:*",
        "rds:*",
        "rds-db:*",
        "kms:*",
        "cloudwatch:*",
        "logs:*",
        "acm:*",
        "elasticfilesystem:*",
        "ecr:*",
        "ecr-public:*",
        "events:*",
        "ssm:*",
        "sts:*",
        "sqs:*",
        "dynamodb:*",
        "vpce:*",
        "opensearch:*",
        "cloudfront:CreateCloudFrontOriginAccessIdentity",
        "athena:StartQueryExecution",
        "athena:GetQueryExecution",
        "athena:GetQueryResults",
        "athena:GetDatabase",
        "athena:CreateDataCatalog",
        "glue:CreateDatabase",
        "glue:GetDatabase",
        "glue:GetDatabases",
        "geo:*",
        "geo-places:*",
        "geo-routes:*"
      ],
      "Resource": "*"
    }
  ]
}
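Several of the requirements above are easy to verify before starting. For example, the /22 VPC minimum can be checked with Python's `ipaddress` module; a minimal sketch (the helper name is ours):

```python
import ipaddress

def vpc_cidr_ok(cidr: str, min_prefix: int = 22) -> bool:
    """Return True if the VPC CIDR is at least a /22.

    A smaller prefix length means a larger network, so the deployment
    requirement translates to prefixlen <= 22.
    """
    return ipaddress.ip_network(cidr, strict=True).prefixlen <= min_prefix

print(vpc_cidr_ok("10.110.0.0/16"))  # True: /16 is larger than /22
print(vpc_cidr_ok("10.0.0.0/24"))    # False: /24 is too small
```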
Installation
- To install Qrvey 9.x.x in your AWS account, create the following file: config.json. For more details, see the Configuration Variables section below.
{
"account_config": {
"access_key_id": "<ACCESS_KEY>",
"secret_access_key": "<SECRET_KEY>",
"region": "<REGION>",
"bucket": "<S3_BUCKET_TO_STORE_THE_STATE_FILE>",
"key": "<FILE_NAME>"
},
"variables": {
"registry_user": "<REGISTRY_USER_PROVIDED_BY_QRVEY_SUPPORT>",
"registry_key": "<REGISTRY_KEY_PROVIDED_BY_QRVEY_SUPPORT>",
"qrvey_chart_version": "<QRVEY_VERSION>", // found at the end of the Docker image provided under Requirements above
"enable_location_services": true,
"es_config": {
"size": "large", // small, medium, large, xlarge, 2xlarge, or 4xlarge (see the size Parameter Options table below)
"count": 1
},
"customer_info": {
"firstname": "",
"lastname": "",
"email": "email@company.com",
"company": "<COMPANY_NAME>"
},
"initial_admin_email": "admin@company.tld",
"globalization": {
"google_client_email": "", // optional
"google_client_private_key": "", // optional
"google_document_id": "", // optional
"google_document_sheet_title": "" // optional
},
"enable_trino": false // optional
}
}
Once the above prerequisites are ready, run the following commands to install Qrvey.
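Before running the installer, the file can be sanity-checked for the keys the example above fills in. A minimal Python sketch (the helper name and required-key list are our own, based on the example config):

```python
import json

# Keys the example config.json above always fills in; adjust as needed.
REQUIRED = {
    "account_config": {"access_key_id", "secret_access_key", "region", "bucket", "key"},
    "variables": {"registry_user", "registry_key", "qrvey_chart_version",
                  "customer_info", "initial_admin_email"},
}

def missing_keys(cfg: dict) -> list:
    """Return dotted paths of required keys missing from a parsed config.json."""
    missing = []
    for section, keys in REQUIRED.items():
        present = set(cfg.get(section, {}))
        missing += [f"{section}.{k}" for k in sorted(keys - present)]
    return missing

# Example with an incomplete config:
sample = json.loads('{"account_config": {"region": "us-east-1"}, "variables": {}}')
print(missing_keys(sample)[:2])  # ['account_config.access_key_id', 'account_config.bucket']
```

In practice you would load your real file with `json.load(open("config.json"))` and confirm the returned list is empty before invoking the installer.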
- From your terminal, navigate to the directory that contains the config file above.
- Use the following command to log in to the Qrvey Registry:
docker login qrvey.azurecr.io --username $registry_user --password-stdin <<< $registry_key
- Run the installation command with the desired Terraform option: plan, apply, output, or destroy. For installation, use the apply option. The installation process should take about two hours.
# This command is for macOS. Set the --platform param as required for your OS.
docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} apply
After running the apply command, wait until the process is complete and review the resources created.
- You may run the following command to get environment outputs, including the admin username and password, to log in to Qrvey. Note: The command below is for macOS. Set the --platform param as required for your OS.
docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} output
##########
### ####
### ### +++ +++ +++ +++++ +++ ++
### ### ++++ +++ +++ +++ +++ ++ +++
### ### ++ ++ +++ +++ ++++ ++ +++
### ### ++ ++ ++ ++++++++ +++ ++
### ### ++ ++++++ +++ ++++++
#### ##### ++ ++++ +++ +++ ++++
######## ++ ++ +++++++ +++
##### ++
######## ++++
# ENVIRONMENT DETAILS
DEPLOYMENT_ID: deployment-id
URL: https://deployment-id.mp.qrveyapp.com
ADMIN URL: https://deployment-id.mp.qrveyapp.com/admin/app/
ADMIN USER: admin@company.tld
ADMIN PASSWORD: generated_admin_password
APIKEY: qrvey_api_key
PostgresqlConnection: postgres://qrvey_usr:db_password@deployment-id-qrvey-db.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com:5432/postgres
ES_HOST: https://1.2.3.4:9200/
ES_USERNAME: elastic
ES_PASSWORD: elastic_password
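If you want to capture these environment details programmatically (for example, to feed them into other tooling), the `KEY: value` lines can be parsed from the command's text output. A minimal sketch (the function name is ours):

```python
def parse_env_details(text: str) -> dict:
    """Collect 'KEY: value' pairs from the installer's `output` text.

    The ASCII banner and '#' comment lines are skipped; only the first
    ':' on a line separates key from value, so URLs stay intact.
    """
    details = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines, the banner, and comment headers
        key, sep, value = line.partition(":")
        if sep and value.strip():
            details[key.strip()] = value.strip()
    return details

sample = "DEPLOYMENT_ID: deployment-id\nURL: https://deployment-id.mp.qrveyapp.com"
print(parse_env_details(sample)["URL"])  # https://deployment-id.mp.qrveyapp.com
```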
- Navigate to your Qrvey domain and log in to the platform.

Note:
If you want to use a custom domain for your deployment, set the "dns_zone_name" property under the "variables" object in your config.json to the desired domain.
After deployment, the output includes a Load Balancer URL. Set this Load Balancer URL as the target of the CNAME record for your custom domain in your DNS provider.
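If your custom domain is hosted in Route 53, one way to create that CNAME record is with a change batch file (the record name is a placeholder; apply it with `aws route53 change-resource-record-sets --hosted-zone-id <ZONE_ID> --change-batch file://cname.json`):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "qrvey.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "<LOAD_BALANCER_URL_FROM_OUTPUT>" }]
      }
    }
  ]
}
```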
Upgrading to a Newer Version
To upgrade your Qrvey MultiPlatform Environment to a newer version, follow the same steps as in the Installation section above. The only change required is in Step 1:
- Update the qrvey_chart_version variable in your config.json file to the desired new version.
After updating the version, repeat Steps 2–6 from the Installation section, using the updated qrvey_version value in the relevant commands.
This will apply the upgrade and update your environment to the specified version.
Note: When upgrading from a version earlier than 9.2.1 to 9.2.1 or later, add the --refresh-helm flag to the apply command. This cleans up removed microservices, but may cause a few minutes of downtime, so plan upgrades during off hours.
Switching from Multi-AZ to Single-AZ Configuration
If you are upgrading from v9.0.x or v9.1.x to v9.2 or later, note that previous versions used a multi-AZ configuration, where nodes could be distributed across different Availability Zones (AZs). This can result in cross-AZ data transfer costs.
To switch your instance to use a single-AZ configuration after upgrading, follow these steps:
- Before upgrading: Set the single_az_mode property under the variables object in your config.json to false (to match the existing deployment).
- Upgrade to v9.2 (or later) by following the upgrade steps above.
- After the upgrade completes successfully: Change the single_az_mode variable to true in your config.json.
- Run the apply command again with the --migrate-to-single-az flag:
docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} apply --migrate-to-single-az
To check if your deployment is using a single AZ or multiple AZs, you can run the following command:
docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} validate --az-status
This will return the number of AZs used in your instance.
Removing an Instance
To remove (destroy) a Qrvey MultiPlatform Environment instance and all associated resources, follow these steps:
- Navigate to the directory containing your config.json file.
- Run the destroy command to preview the resources that will be removed (similar to a Terraform "plan"):
docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} destroy
- To actually remove all resources, run the destroy command with the --approve flag:
docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} destroy --approve
Warning: Once the resources are removed, all data and metadata associated with the instance will be permanently deleted and cannot be recovered.
Configuration Variables
This section describes the input variables available for AWS deployment using Terraform. Each variable can be customized to fit your deployment requirements. Refer to the table below for variable names, types, default values, and descriptions.
| Variable Name | Type | Default Value | Description |
|---|---|---|---|
| api_key | string | "" | API Key for migrated instances |
| access_key_id | string | "" | AWS account access key |
| region | string | "us-east-1" | AWS region for resource deployment |
| secret_access_key | string | "" | AWS account secret key |
| session_token | string | null | AWS session token |
| azs | list(string) | null | Availability Zones for subnet creation |
| chart_name | string | "qrvey" | Name of the chart to deploy |
| chart_values | list(object) | [] | Chart values (name, value, type) |
| create_vpc_endpoints | bool | true | Whether to create VPC endpoints |
| customer_info | object | {} | Required. An object containing customer information. |
| deployment_id | string | "" | Deployment ID (for migrations) |
| dns_zone_name | string | "" | DNS zone name |
| elasticsearch | object | {} | Existing Elasticsearch engine data (host, auth_user, auth_password, cluster_name, version) |
| enable_location_services | bool | false | Enable location services |
| enable_trino | bool | false | Deploy Trino Helm chart |
| es_config | object | {} | Elasticsearch config (name, size, count, storage) |
| globalization | object | {} | Globalization settings (google_client_email, google_client_private_key, etc.) |
| initial_admin_email | string | "" | Required. Initial admin email. |
| intra_subnets_cidrs | list(string) | ["10.110.201.0/24", "10.110.202.0/24"] | Intra subnets |
| openai_api_key | string | "sk-xxxxxxxxxxxxxxxxxxxxxx" | OpenAI API key |
| postgresql_config | object | {} | PostgreSQL config (name, instance_class, version) |
| private_subnets_cidrs | list(string) | ["10.110.1.0/24", "10.110.2.0/24", "10.110.32.0/20", "10.110.48.0/20"] | Private subnets |
| public_subnets_cidrs | list(string) | ["10.110.101.0/24", "10.110.102.0/24"] | Public subnets |
| qrvey_chart_version | string | "" | Required. Qrvey chart version. |
| rabbitmq_service_internal | bool | true | Use internal RabbitMQ service (true for ServiceIP, false for LoadBalancer) |
| registry_key | string | "" | Required. Qrvey registry key. |
| registry_user | string | "" | Required. Qrvey registry user. |
| s3_bucket | object | {} | Existing S3 bucket configuration |
| single_az_mode | bool | false | If true, deploys all resources in a single Availability Zone (AZ); set to false for multi-AZ deployments. Useful for controlling cross-AZ data transfer costs. |
| table_hierarchy_enabled | bool | false | Enable table hierarchy feature |
| trino_config | object | {} | Trino config (name, size, count) |
| use_athena_from_serverless | bool | false | Use Athena from serverless |
| use_existing_vpc | bool | false | Use an existing VPC |
| use_public_subnet_for_db | bool | false | Use a public subnet for the database |
| vpc_cidr | string | "10.110.0.0/16" | VPC CIDR block |
| vpc_details | object | null | VPC details (vpc_id, public_subnets, private_subnets, intra_subnets) |
chart_values
[
{
"name": "string",
"value": "string",
"type": "string"
}
]
customer_info
{
"firstname": "string",
"lastname": "string",
"email": "string",
"company": "string"
}
elasticsearch
{
"host": "", // optional, default
"auth_user": "elastic", // optional, default
"auth_password": "", // optional, default
"cluster_name": "elasticsearch-es-internal-http.elastic-system.svc.cluster.local", // optional, default
"version": "7.10" // optional, default
}
es_config
{
"name": "elasticsearch", // optional, default
"size": "medium", // optional, default
"count": 1, // optional, default
"storage": "200Gi" // optional, default
}
size Parameter Options
| Size | node_size | JVM_MEM | POD_CPU | POD_MEM |
|---|---|---|---|---|
| small | m5.large | 2g | 1 | 4Gi |
| medium | r6i.large | 4g | 1 | 8Gi |
| large | r6i.xlarge | 12g | 2 | 24Gi |
| xlarge | r6i.2xlarge | 18g | 4 | 35Gi |
| 2xlarge | r6i.2xlarge | 24g | 4 | 52Gi |
| 4xlarge | r6i.4xlarge | 31g | 24 | 120Gi |
globalization
{
"google_client_email": "", // optional, default
"google_client_private_key": "", // optional, default
"google_document_id": "", // optional, default
"google_document_sheet_title": "" // optional, default
}
postgresql_config
{
"name": "postgresql", // optional, default
"instance_class": "db.t3.medium", // optional, default
"version": "16.3" // optional, default
}
s3_bucket
{
"qrveyuserfiles": "", // optional, default
"use_cloudfront": "true", // optional, default
"drchunkdata": "", // optional, default
"drdatacommons": "", // optional, default
"drdatalake": "", // optional, default
"config": "", // optional, default
"basedatasets": "" // optional, default
}
trino_config
{
"name": "trino", // optional, default
"size": "small", // optional, default
"count": 2 // optional, default
}
vpc_details
{
"vpc_id": "string",
"public_subnets": ["string"],
"private_subnets": ["string"],
"intra_subnets": ["string"] // optional
}
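Putting use_existing_vpc and vpc_details together, a deployment targeting an existing VPC might include a fragment like the following under the "variables" object (all IDs are placeholders):

```json
{
  "use_existing_vpc": true,
  "vpc_details": {
    "vpc_id": "vpc-0123456789abcdef0",
    "public_subnets": ["subnet-0aaa", "subnet-0bbb"],
    "private_subnets": ["subnet-0ccc", "subnet-0ddd"],
    "intra_subnets": ["subnet-0eee", "subnet-0fff"]
  }
}
```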
For Existing VPCs:
If you are using an existing VPC and subnets, you must manually add the following tag to your private subnets and the security group named qrvey-eks-<deploymentid>-node:
karpenter.sh/discovery : qrvey-eks-<deploymentid>
This tag is required by Karpenter to create nodes. These tags are not automatically added when using existing VPCs. After adding the tags, connect to the cluster and delete any failed Helm charts as needed before proceeding with the apply step.
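The tag can be added from the AWS console or the AWS CLI. For example (the resource IDs are placeholders; the tag value must match your deployment ID):

```shell
# Tag the private subnets and the node security group so Karpenter can discover them.
aws ec2 create-tags \
  --resources subnet-0aaa subnet-0bbb sg-0ccc \
  --tags Key=karpenter.sh/discovery,Value=qrvey-eks-<deploymentid>
```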
Troubleshooting
Helm Release Error: Another Operation in Progress
If the deployment fails with the following error:
Error: another operation (install/upgrade/rollback) is in progress
with helm_release.qrvey[0],
on k8s-cr.tf line 606, in resource "helm_release" "qrvey":
606: resource "helm_release" "qrvey" {
Then add the --refresh-helm flag after the apply command:
docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} apply --refresh-helm
Note: This flag should be used only in these cases, as it triggers an aggressive upgrade process in which Qrvey containers are forcefully recreated.
Services or Pods Not Starting Due to Spot Instances Disabled
In some new AWS accounts, Spot Instances may be disabled by default. If, after a new deployment, you notice that services or pods are not coming up and there are no obvious errors, it may be due to Spot Instances not being enabled in your AWS account.
To enable Spot Instances, run the following AWS CLI command:
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
After running this command, retry your deployment.