Version: 9.3

Deployment on AWS

This page describes the process and requirements for deploying a new Qrvey MultiPlatform Environment in AWS. For v9 installations, contact Customer Support to obtain your Docker registry credentials.

EKS Support

Elastic Kubernetes Service (EKS) v1.33 is the default version starting in v9.2.5. Standard support for EKS v1.32 ends on March 23, 2026. Qrvey recommends that all v9.2.x customers upgrade to v9.2.5. If you have questions about EKS support on the Qrvey platform, contact Qrvey Support.

Requirements

  • Docker: The latest version of Docker should be installed.

  • Docker image: For version information, see the release notes.

  • EC2 Quota: At least 56 available vCPUs in the EC2 quota for "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances" (Quota ID: L-1216C47A).

  • Athena service quotas (recommended): Ask AWS to increase Amazon Athena Active DDL queries (Quota ID: L-3CE0BBA0) to 100+ and Active DML queries (Quota ID: L-FC5F6546) to 500+ to allow multiple datasets with joins to sync data and improve performance.

  • IAM user with Admin access, an access key, and a secret key: This is needed to create the resources for deployment.

  • Registry username and password provided by the Qrvey Support team.

  • S3 Bucket to store the state file. It should be in the same region as the deployment.

  • SMTP configuration to send emails.

  • VPC (or equivalent) with a minimum CIDR of /22, in which the Qrvey Platform will be deployed.

  • DNS Hosted Zone (Optional): To generate valid SSL Certificates for the Qrvey Composer domain. If there is no domain setup, Qrvey generates one with the following format: $deployment_id.mp.qrveyapp.com. To automatically set up a custom DNS, the Route 53 zone should be in the same account as the deployment, and credentials should have sufficient permissions.

  • If using an IAM user for deployment, it needs the following minimum permissions:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:*",
            "elasticloadbalancing:*",
            "autoscaling:*",
            "eks:*",
            "iam:*",
            "route53:*",
            "s3:*",
            "secretsmanager:*",
            "rds:*",
            "rds-db:*",
            "kms:*",
            "cloudwatch:*",
            "logs:*",
            "acm:*",
            "elasticfilesystem:*",
            "ecr:*",
            "ecr-public:*",
            "events:*",
            "ssm:*",
            "sts:*",
            "sqs:*",
            "dynamodb:*",
            "vpce:*",
            "opensearch:*",
            "athena:*",
            "geo:*",
            "geo-places:*",
            "geo-routes:*",
            "sns:*",
            "cloudformation:*",
            "cloudfront:*",
            "lambda:*",
            "ecs:UpdateService",
            "glue:*",
            "es:*"
          ],
          "Resource": "*"
        }
      ]
    }

Note: If you have enabled AWS GuardDuty in your account, you might see an alert for escalated privileges during deployment or upgrade. This is a false positive and can be safely ignored. The deployment process needs to create IAM roles and attach them to resources, which requires permissions to create other roles. This legitimate activity triggers the GuardDuty alert.
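The /22 minimum can be sanity-checked before deployment with Python's standard ipaddress module. This is an illustrative sketch, not part of the Qrvey tooling; `meets_minimum_cidr` is a hypothetical helper name.

```python
import ipaddress

def meets_minimum_cidr(cidr: str, required_prefix: int = 22) -> bool:
    """Return True when the network is at least as large as a /22 (1,024 addresses)."""
    network = ipaddress.ip_network(cidr, strict=True)
    return network.prefixlen <= required_prefix

print(meets_minimum_cidr("10.110.0.0/16"))  # default vpc_cidr: True
print(meets_minimum_cidr("10.0.0.0/24"))    # too small: False
```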

OpenSearch Cluster Options

Qrvey supports two mutually exclusive options for the OpenSearch cluster used for indexing and search. You must configure exactly one. Do not configure both in the same deployment.

Option 1: AWS OpenSearch Service (Recommended)

Use the opensearch_config variable to deploy a managed AWS OpenSearch Service domain in the private subnets of your VPC. For configuration details, see opensearch_config.

Option 2: In-Cluster Elasticsearch (ECK)

Use the es_config variable to deploy an Elasticsearch cluster inside the EKS cluster, managed by the Elastic Cloud on Kubernetes (ECK) operator. For configuration details, see es_config.

Note: If you are upgrading from a Qrvey version that used a public AWS OpenSearch domain and need to move it to a private VPC configuration, see Migrate Public OpenSearch to VPC OpenSearch.
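The mutual-exclusivity rule can be verified before running the installer. The following Python sketch is illustrative only, assuming a config shaped like the installation example; `check_search_config` is not part of the Qrvey tooling.

```python
def check_search_config(variables: dict) -> str:
    """Return which search option is configured; raise if zero or both are set."""
    has_opensearch = bool(variables.get("opensearch_config"))
    has_es = bool(variables.get("es_config"))
    if has_opensearch and has_es:
        raise ValueError("opensearch_config and es_config are mutually exclusive")
    if not (has_opensearch or has_es):
        raise ValueError("configure exactly one of opensearch_config or es_config")
    return "opensearch_config" if has_opensearch else "es_config"

print(check_search_config({"opensearch_config": {"enabled": True}}))  # opensearch_config
```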

Installation

  1. To install Qrvey 9.x.x in your AWS account, create a config.json file.

    For more information, see AWS Deployment Input Variables.

    {
      "account_config": {
        "access_key_id": "<ACCESS_KEY>",
        "secret_access_key": "<SECRET_KEY>",
        "region": "<REGION>",
        "bucket": "<S3_BUCKET_TO_STORE_THE_STATE_FILE>",
        "key": "<FILE_NAME>"
      },
      "variables": {
        "registry_user": "<REGISTRY_USER_PROVIDED_BY_QRVEY_SUPPORT>",
        "registry_key": "<REGISTRY_KEY_PROVIDED_BY_QRVEY_SUPPORT>",
        "qrvey_chart_version": "<QRVEY_VERSION>", // found at the end of the Docker image tag provided under prerequisites
        "enable_location_services": true,
        // Configure exactly one of the following OpenSearch options (mutually exclusive):
        // Option 1: AWS OpenSearch Service in a private VPC (recommended)
        "opensearch_config": {
          "enabled": true,
          "instance_type": "r6g.large.search",
          "instance_count": 3,
          "volume_size": 100
        },
        // Option 2: In-cluster Elasticsearch (ECK)
        //"es_config": {
        //  "size": "large", // can be small, medium, or large
        //  "count": 1
        //},
        "customer_info": {
          "firstname": "",
          "lastname": "",
          "email": "email@company.com",
          "company": "<COMPANY_NAME>"
        },
        "initial_admin_email": "admin@company.tld",
        "postgresql_config": {
          "version": "16.6"
        },
        "globalization": {
          "google_client_email": "", // optional
          "google_client_private_key": "", // optional
          "google_document_id": "", // optional
          "google_document_sheet_title": "" // optional
        },
        "enable_trino": false // optional
      }
    }

    With the config.json file in place, you can proceed with the installation.

  2. From your terminal, navigate to the directory that contains the configuration file.

  3. Use the following command to log into the Qrvey Registry:

    docker login qrvey.azurecr.io --username $registry_user --password-stdin <<< $registry_key
  4. Run the installation commands with the desired Terraform option: plan, apply, output, or destroy.
    For installation, use the apply option. The installation process should take about two hours.

    # This command is for macOS. Set the --platform parameter as required for your OS.
    docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} apply

    After running the apply command, wait until the process is complete and review the resources created.

  5. You can run the following command to get environment outputs, including the admin username and password, to log into Qrvey. The following command is for macOS. Set the --platform parameter as required for your OS.

    docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} output
    # (Qrvey ASCII-art logo banner appears here)

    # ENVIRONMENT DETAILS
    DEPLOYMENT_ID: deployment-id
    URL: https://deployment-id.mp.qrveyapp.com
    ADMIN URL: https://deployment-id.mp.qrveyapp.com/admin/app/
    ADMIN USER: admin@company.tld
    ADMIN PASSWORD: generated_admin_password
    APIKEY: qrvey_api_key
    PostgresqlConnection: postgres://qrvey_usr:db_password@deployment-id-qrvey-db.<random-id>.<region>.rds.amazonaws.com:5432/postgres
    ES_HOST: https://1.2.3.4:9200/
    ES_USERNAME: elastic
    ES_PASSWORD: elastic_password
  6. Navigate to your Qrvey domain and log into the platform.

    Login Page

Note: To use a custom domain for your deployment, set the property "dns_zone_name" under the "variables" object in your config.json to the desired URL. After deployment, you receive a Load Balancer URL in the output. Set this load balancer URL as the target for the CNAME record of your custom domain in your DNS provider.
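A quick pre-flight check can catch missing required variables before running apply. The sketch below is illustrative and not part of the Qrvey installer; it assumes the required variables listed in AWS Deployment Input Variables, and `missing_required` is a hypothetical helper.

```python
REQUIRED_VARIABLES = [
    "registry_user",
    "registry_key",
    "qrvey_chart_version",
    "initial_admin_email",
    "customer_info",
]

def missing_required(config: dict) -> list:
    """List required variables that are absent or empty under config['variables']."""
    variables = config.get("variables", {})
    return [name for name in REQUIRED_VARIABLES if not variables.get(name)]

partial = {"variables": {"registry_user": "user", "registry_key": "key"}}
print(missing_required(partial))
# ['qrvey_chart_version', 'initial_admin_email', 'customer_info']
```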

Upgrade to a Newer Version

An upgrade uses similar steps to an installation.

Note: The upgrade process can take up to 2 hours of downtime. Plan to perform upgrades during off hours.

Before You Begin

  • If you plan to upgrade from v9.0.x or v9.1.x to v9.2 or later and then switch to a single-AZ configuration, review Switch from Multi-AZ to Single-AZ Configuration.

  • Before upgrading from v9.0.x to v9.2.2, verify whether data-load performance needs to be improved. This can be done using the dataload_config property.

Perform the Upgrade

To upgrade your Qrvey MultiPlatform Environment to a newer version, follow the same steps as in the Installation section. The only change required is in Step 1:

  • Update the qrvey_chart_version variable in your config.json file to the new version.

After updating the version, repeat Steps 2–6 from the Installation section, using the updated qrvey_version value in the relevant commands.

This applies the upgrade and updates your environment to the specified version.

Switch from Multi-AZ to Single-AZ Configuration

Previous Qrvey versions used a multi-AZ configuration, where nodes could be distributed across different Availability Zones (AZs). If you are upgrading from v9.0.x or v9.1.x to v9.2, this layout can result in cross-AZ data transfer costs.

To switch your instance to use a single-AZ configuration after upgrading:

  1. Before upgrading, set the property single_az_mode under the variables object in your config.json to false (to match the existing deployment).
  2. Upgrade to v9.2 or later.
  3. After the upgrade completes successfully, change the single_az_mode variable to true in your config.json.
  4. Run the apply command again with the flag --migrate-to-single-az:
docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} apply --migrate-to-single-az

To check if your deployment is using a single AZ or multiple AZs, you can run the following command:

docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} validate --az-status

This returns the number of AZs used in your instance.

Customize a Deployment

You can customize your existing Qrvey deployment by modifying the parameters in your config.json file and re-applying the configuration. This allows you to change various settings without needing to redeploy from scratch.

  1. Navigate to the directory containing your config.json file.

  2. Edit the config.json file and modify the desired parameters under the "variables" object.

  3. Save the changes.

  4. Run the apply command to update your deployment:

    docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} apply
  5. Wait for the process to complete and review the output.

Example: Change the Domain (DNS) for an Existing Instance

A common customization is setting up a custom domain for your Qrvey deployment.

  1. Update your config.json file:

    Set the dns_zone_name property under the "variables" object to your custom domain:

    {
      "account_config": {
        // ... existing account config ...
      },
      "variables": {
        // ... other variables ...
        "dns_zone_name": "qrvey.yourdomain.com"
      }
    }
  2. Apply the changes:

    Run the apply command to update your deployment:

    docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} apply
  3. Configure DNS:

    After the apply process completes, the output includes a Load Balancer URL such as the following:

    Load Balancer URL: abc123-1234567890.us-east-1.elb.amazonaws.com

    You need to add a CNAME record in your DNS provider:

    • Name/Host: qrvey (or your desired subdomain)
    • Type: CNAME
    • Value/Target: The Load Balancer URL from the output (for example, abc123-1234567890.us-east-1.elb.amazonaws.com)
    • TTL: 300 (or your preferred value)

Wait for DNS propagation. After you've added the CNAME record, it can take a few minutes to several hours for DNS changes to propagate globally, depending on your DNS provider and TTL settings.

  4. Access your deployment:

    After DNS propagation is complete, you can access your Qrvey deployment using your custom domain (for example, https://qrvey.yourdomain.com).

Note: If you use AWS Route 53 for DNS management, when the hosted zone is in the same AWS account as your deployment, the CNAME record can be automatically created during the apply process if you have the necessary permissions configured.

Remove an Instance

To remove (destroy) a Qrvey MultiPlatform Environment instance and all associated resources:

  1. Navigate to the directory containing your config.json file.

  2. Run the destroy command to preview the resources that will be removed (similar to a Terraform "plan"):

    docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} destroy
  3. To actually remove all resources, run the destroy command with the --approve flag:

    docker run --platform=linux/amd64 -v $(pwd)/config.json:/app/qrvey/config.json -it --rm qrvey.azurecr.io/qrvey-terraform-aws:${qrvey_version} destroy --approve

Warning: When the resources are removed, all data and metadata associated with the instance is permanently deleted.

AWS Deployment Input Variables

The following input variables are available for AWS deployments using Terraform. Each variable can be customized to fit your deployment requirements.

| Variable Name | Type | Default Value | Description |
|---|---|---|---|
| api_key | string | "" | API Key for migrated instances. |
| access_key_id | string | "" | AWS account access key. |
| region | string | "us-east-1" | AWS region for resource deployment. |
| secret_access_key | string | "" | AWS account secret key. |
| session_token | string | null | AWS session token. |
| azs | list(string) | null | Availability zones for subnet creation. |
| chart_name | string | "qrvey" | Name of the chart to deploy. |
| chart_values | list(object) | [] | Chart values (name, value, type). |
| create_vpc_endpoints | bool | true | Whether to create VPC endpoints. |
| customer_info | object | {} | Required. An object containing customer information. |
| deployment_id | string | "" | Deployment ID (for migrations). |
| dns_zone_name | string | "" | DNS zone name. |
| elasticsearch | object | {} | Existing Elasticsearch engine data (host, auth_user, auth_password, cluster_name, version). Use only when upgrading from v8 or earlier. |
| elasticsearch_encryption | bool | false | Enable encryption for Elasticsearch. The flag must be added to the installation configuration. If not added, no encryption takes place. |
| enable_location_services | bool | false | Enable location services. |
| enable_monitoring | bool | false | Enable monitoring features. When generated, Qrvey environment details include Grafana credentials. For more information, see Configure Monitoring and Logging. |
| enable_trino | bool | false | Deploy Trino Helm chart. |
| es_config | object | {} | In-cluster Elasticsearch (ECK) configuration (name, size, count, storage). Mutually exclusive with opensearch_config. |
| opensearch_config | object | {} | AWS OpenSearch Service configuration for a VPC-private managed domain. Mutually exclusive with es_config. For details, see opensearch_config. |
| globalization | object | {} | Globalization settings (google_client_email, google_client_private_key, and so on). |
| initial_admin_email | string | "" | Required. Initial admin email. |
| intra_subnets_cidrs | list(string) | ["10.110.201.0/24", "10.110.202.0/24"] | Intra subnets. |
| openai_api_key | string | "sk-xxxxxxxxxxxxxxxxxxxxxx" | OpenAI API key. |
| postgresql_config | object | {} | PostgreSQL config (name, instance_class, version). |
| private_subnets_cidrs | list(string) | ["10.110.1.0/24", "10.110.2.0/24", "10.110.32.0/20", "10.110.48.0/20"] | Private subnets. |
| public_subnets_cidrs | list(string) | ["10.110.101.0/24", "10.110.102.0/24"] | Public subnets. |
| qrvey_chart_version | string | "" | Required. Qrvey chart version. |
| rabbitmq_service_internal | bool | true | Use internal RabbitMQ service (true for ServiceIP, false for LoadBalancer). |
| rabbitmq_replica_count | number | 3 | Number of replicas for the RabbitMQ cluster. If you are upgrading from v9.1.x to v9.2.2 or later, use this flag to keep the replicas at 3. |
| registry_key | string | "" | Required. Qrvey registry key. |
| registry_user | string | "" | Required. Qrvey registry user. |
| s3_bucket | object | {} | Existing S3 bucket configuration. |
| security_headers | object | {} | Allow customers to modify security headers in the HTTP response. For an example, see HTTP Response Security Headers. |
| single_az_mode | bool | false | If set to true, deploys all resources in a single Availability Zone (AZ). Set to false for multi-AZ deployments. Useful for controlling cross-AZ data transfer costs. |
| table_hierarchy_enabled | bool | false | Enable table hierarchy feature. |
| trino_config | object | {} | Trino configuration (name, size, count). |
| use_athena_from_serverless | bool | false | Use Athena from serverless. |
| use_existing_vpc | bool | false | Use an existing VPC. |
| use_public_subnet_for_db | bool | false | Use a public subnet for the database. |
| vpc_cidr | string | "10.110.0.0/16" | VPC CIDR block. |
| vpc_details | object | null | VPC details (vpc_id, public_subnets, private_subnets, intra_subnets). |
| dataload_config | object | {} | (Available v9.2.2) Configuration for dataset loading microservices. Allows setting min/max replicas for each datarouter pod. All properties are optional. |
| additional_cors_origins | array | [] | (Available v9.2.4) Adds CORS (Cross-Origin Resource Sharing) support by allowing you to add domains to an allowlist for making cross-origin requests to your Qrvey instance. This variable is compatible with previous releases. For an example, see additional_cors_origins. |
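If you override vpc_cidr or any of the subnet CIDR lists, every subnet must fall inside the VPC range. The following stdlib sketch illustrates the check; `subnets_fit_vpc` is a hypothetical helper, not part of the installer.

```python
import ipaddress

def subnets_fit_vpc(vpc_cidr: str, subnet_cidrs: list) -> bool:
    """Check that every subnet CIDR falls inside the VPC CIDR."""
    vpc = ipaddress.ip_network(vpc_cidr)
    return all(ipaddress.ip_network(cidr).subnet_of(vpc) for cidr in subnet_cidrs)

defaults = [
    "10.110.1.0/24", "10.110.2.0/24", "10.110.32.0/20", "10.110.48.0/20",  # private
    "10.110.101.0/24", "10.110.102.0/24",                                  # public
    "10.110.201.0/24", "10.110.202.0/24",                                  # intra
]
print(subnets_fit_vpc("10.110.0.0/16", defaults))  # True
```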

additional_cors_origins

"additional_cors_origins": [
  "admin.example.com",
  "partner.qrvey.com"
]

chart_values

[
  {
    "name": "string",
    "value": "string",
    "type": "string"
  }
]

customer_info

{
  "firstname": "string",
  "lastname": "string",
  "email": "string",
  "company": "string"
}

elasticsearch

Note: Use this configuration only when upgrading from Platform v8 or earlier to v9 or later.

{
  "host": "", // optional, default
  "auth_user": "elastic", // optional, default
  "auth_password": "", // optional, default
  "cluster_name": "elasticsearch-es-internal-http.elastic-system.svc.cluster.local", // optional, default
  "version": "7.10" // optional, default
}

es_config

Note: es_config and opensearch_config are mutually exclusive. Configure only one.

{
  "name": "elasticsearch", // optional, default
  "size": "medium", // optional, default
  "count": 1, // optional, default
  "storage": "200Gi" // optional, default
}

size Parameter Options

| Size | node_size | JVM_MEM | POD_CPU | POD_MEM |
|---|---|---|---|---|
| small | m5.large | 2g | 1 | 4Gi |
| medium | r6i.large | 4g | 1 | 8Gi |
| large | r6i.xlarge | 12g | 2 | 24Gi |
| xlarge | r6i.2xlarge | 18g | 4 | 35Gi |
| 2xlarge | r6i.2xlarge | 24g | 4 | 52Gi |
| 4xlarge | r6i.4xlarge | 31g | 24 | 120Gi |

opensearch_config

Note: opensearch_config and es_config are mutually exclusive. Configure only one.

Deploys a managed AWS OpenSearch Service domain inside the private subnets of your VPC. Authentication uses AWS Signature Version 4 (SigV4); no username or password is required.

{
  "enabled": true,
  "engine_version": "Elasticsearch_7.10", // optional, default
  "instance_type": "r6g.large.search", // optional, default
  "instance_count": 2, // optional, default
  "volume_size": 100, // optional, default; minimum 10 GB per node
  "volume_type": "gp3", // optional, default
  "dedicated_master_enabled": false, // optional, default
  "dedicated_master_type": "r6g.large.search", // optional, default
  "dedicated_master_count": 3, // optional, default
  "zone_awareness_enabled": true, // optional, default
  "encrypt_at_rest": true, // optional, default
  "node_to_node_encryption": true, // optional, default
  "create_service_linked_role": true // optional, default; set to false if the role already exists in your AWS account
}
| Property | Type | Default | Description |
|---|---|---|---|
| enabled | Boolean | false | Set to true to create the VPC OpenSearch domain. |
| engine_version | string | "Elasticsearch_7.10" | Engine version. Supported: Elasticsearch_7.10. |
| instance_type | string | "r6g.large.search" | Instance type for data nodes. |
| instance_count | number | 2 | Number of data nodes (minimum 1). |
| volume_size | number | 100 | EBS volume size in GB per node (minimum 10). |
| volume_type | string | "gp3" | EBS volume type. |
| dedicated_master_enabled | Boolean | false | Enable dedicated master nodes. |
| dedicated_master_type | string | "r6g.large.search" | Instance type for dedicated masters. |
| dedicated_master_count | number | 3 | Number of dedicated master nodes. |
| zone_awareness_enabled | Boolean | true | Distribute nodes across Availability Zones. |
| encrypt_at_rest | Boolean | true | Enable encryption at rest. |
| node_to_node_encryption | Boolean | true | Enable node-to-node encryption. |
| create_service_linked_role | Boolean | true | Set to false if the service-linked role already exists in your AWS account. |

HTTP Response Security Headers

"security_headers": {
  "content_security_policy": "default-src * 'unsafe-inline' 'unsafe-eval' data: blob:; script-src * 'unsafe-inline' 'unsafe-eval'; style-src * 'unsafe-inline';",
  "x_frame_options": "SAMEORIGIN",
  "cache_control": "no-cache, no-store, must-revalidate",
  "referrer_policy": "unsafe-url"
}
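Each key above corresponds to a standard HTTP response header. The Python sketch below illustrates the assumed key-to-header mapping; the mapping and the `to_http_headers` helper are illustrative, not Qrvey's actual implementation.

```python
def to_http_headers(security_headers: dict) -> dict:
    """Map config keys to the HTTP response header names they control (assumed mapping)."""
    names = {
        "content_security_policy": "Content-Security-Policy",
        "x_frame_options": "X-Frame-Options",
        "cache_control": "Cache-Control",
        "referrer_policy": "Referrer-Policy",
    }
    return {names[key]: value for key, value in security_headers.items() if key in names}

print(to_http_headers({"x_frame_options": "SAMEORIGIN"}))
# {'X-Frame-Options': 'SAMEORIGIN'}
```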

globalization

{
  "google_client_email": "", // optional, default
  "google_client_private_key": "", // optional, default
  "google_document_id": "", // optional, default
  "google_document_sheet_title": "" // optional, default
}

postgresql_config

{
  "name": "postgresql", // optional, default
  "instance_class": "db.t3.medium", // optional, default
  "version": "16.3" // optional, default
}

s3_bucket

{
  "qrveyuserfiles": "", // optional, default
  "use_cloudfront": "true", // optional, default
  "drchunkdata": "", // optional, default
  "drdatacommons": "", // optional, default
  "drdatalake": "", // optional, default
  "config": "", // optional, default
  "basedatasets": "" // optional, default
}

trino_config

{
  "name": "trino", // optional, default
  "size": "small", // optional, default
  "count": 2 // optional, default
}

vpc_details

{
  "vpc_id": "string",
  "public_subnets": ["string"],
  "private_subnets": ["string"],
  "intra_subnets": ["string"] // optional
}

For Existing VPCs

If you are using an existing VPC and subnets, certain tags required by Karpenter to create nodes are not automatically added.

  1. Manually add the following tag to your private subnets and the qrvey-eks-<deploymentid>-node security group:

    karpenter.sh/discovery : qrvey-eks-<deploymentid>
  2. Connect to the cluster and delete any failed Helm charts as needed before proceeding with the apply step.

dataload_config

The dataload_config object allows you to configure resource requests/limits and autoscaling for each microservice involved in dataset loading. All properties under dataload_config are optional. If you do not specify some properties, the system uses default values as shown in the following example (available in v9.2.2).

Note: Changing these properties directly impacts the data loading process. You can use these settings to manage performance. Increasing the maximum number of replicas can improve throughput, but also increases cloud costs. Adjust these values carefully based on your needs and budget.

{
  "dr_file_pump": {
    "resources": {
      "requests": {
        "memory": "768Mi",
        "cpu": "15m"
      },
      "limits": {
        "memory": "768Mi",
        "cpu": "1500m"
      }
    },
    "autoscaling": {
      "min_replicas": 1,
      "max_replicas": 2
    }
  },
  "dr_db_pump": {
    "resources": {
      "requests": {
        "memory": "512Mi",
        "cpu": "15m"
      },
      "limits": {
        "memory": "3072Mi",
        "cpu": "1500m"
      }
    },
    "autoscaling": {
      "min_replicas": 1,
      "max_replicas": 2
    }
  },
  "dr_join_results_pump": {
    "resources": {
      "requests": {
        "memory": "256Mi",
        "cpu": "100m"
      },
      "limits": {
        "memory": "1024Mi",
        "cpu": "1"
      }
    },
    "autoscaling": {
      "min_replicas": 1,
      "max_replicas": 10
    }
  },
  "dr_transformation": {
    "resources": {
      "requests": {
        "memory": "256Mi",
        "cpu": "100m"
      },
      "limits": {
        "memory": "2096Mi",
        "cpu": "2"
      }
    },
    "autoscaling": {
      "min_replicas": 1,
      "max_replicas": 5
    }
  },
  "dr_put_chunk_to_lake": {
    "resources": {
      "requests": {
        "memory": "256Mi",
        "cpu": "50m"
      },
      "limits": {
        "memory": "1536Mi",
        "cpu": "1"
      }
    },
    "autoscaling": {
      "min_replicas": 1,
      "max_replicas": 10
    }
  },
  "dr_put_chunk_to_dl": {
    "resources": {
      "requests": {
        "memory": "256Mi",
        "cpu": "15m"
      },
      "limits": {
        "memory": "1536Mi",
        "cpu": "1"
      }
    },
    "autoscaling": {
      "min_replicas": 1,
      "max_replicas": 10
    }
  }
}

Property Descriptions

  • Each top-level key (for example, dr_file_pump, dr_db_pump) represents a microservice involved in dataset loading.

  • resources: Specifies resource requests and limits for CPU and memory for each microservice pod.

    • requests: Minimum resources guaranteed for the pod.
    • limits: Maximum resources the pod can use.
  • autoscaling: Controls the minimum and maximum number of replicas for each microservice.

    • min_replicas: Minimum number of pods to run.
    • max_replicas: Maximum number of pods to run.
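The fallback behavior described above (any property you omit keeps its default) amounts to a recursive merge of your overrides onto the defaults. The following Python sketch illustrates that behavior; it is not the installer's actual code, and `DATALOAD_DEFAULTS` here shows only one microservice for brevity.

```python
import copy

# Defaults for one microservice, taken from the example above; the rest are omitted.
DATALOAD_DEFAULTS = {
    "dr_file_pump": {
        "resources": {
            "requests": {"memory": "768Mi", "cpu": "15m"},
            "limits": {"memory": "768Mi", "cpu": "1500m"},
        },
        "autoscaling": {"min_replicas": 1, "max_replicas": 2},
    },
}

def merge(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay user-specified properties onto the defaults."""
    result = copy.deepcopy(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

merged = merge(DATALOAD_DEFAULTS, {"dr_file_pump": {"autoscaling": {"max_replicas": 6}}})
print(merged["dr_file_pump"]["autoscaling"])
# {'min_replicas': 1, 'max_replicas': 6}
```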

Troubleshooting

Services or Pods Not Starting When Spot Instances Are Disabled

In some new AWS accounts, Spot Instances can be disabled by default. After a new deployment, if you notice that services or pods are not coming up and there are no obvious errors, Spot Instances might not be enabled in your AWS account.

To enable Spot Instances, run the following AWS CLI command:

aws iam create-service-linked-role --aws-service-name spot.amazonaws.com

After running this command, retry your deployment.

Error: creating Security Group (vpc-endpoints-sg) when using an existing VPC

If you encounter the error:

Error: creating Security Group (vpc-endpoints-sg)

The VPC endpoints probably already exist in that VPC. To avoid this issue, set the variable create_vpc_endpoints to false in your config.json file under the variables section.

If you’re using an existing VPC, your config.json should look similar to this:

"variables": {
  ...
  "create_vpc_endpoints": false,
  "azs": ["zone-id-1", "zone-id-2"],
  "use_existing_vpc": true,
  "vpc_details": {
    "vpc_id": "vpc-id",
    "public_subnets": ["subnet-id-1", "subnet-id-2"],
    "private_subnets": ["subnet-id-1", "subnet-id-2"],
    "intra_subnets": ["subnet-id-1", "subnet-id-2"]
  }
  ...
}

If you’re upgrading from a version before v9.2, make sure the variable single_az_mode is set to false.

Additional Resources

Deploy with Private EKS