Configuration
Configurations affecting how your platform gets built.
Keeping with the theme of building an opinionated platform, most configurations are optional. When a field of type Object is optional, you can omit it entirely or use {}. For such fields, the defaults are specified in their own spec table.
CLI Spec
environmentName (String)
The name of the environment that will be created and managed by the CLI. This name is also used when naming many resources, such as the eks cluster, the vpc, and so on.
This name must follow these rules:
- All lower-case
- Alphanumeric, with - and _ allowed
- Must start with a letter
- Not more than 16 characters in length
Required: Yes

aws (Object - AWS Spec)
Configurations for all components deployed on AWS, including the vpc, eks, and so on.
Required: Yes

kubernetes (Object - Kubernetes Spec)
Configurations for all services and operators running on the Kubernetes cluster.
Required: No
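Putting the required fields together, a minimal configuration might look like the following sketch. The field names come from the tables on this page; the environment name and region values are illustrative:

```yaml
# Minimal config: only the required fields are set
environmentName: "my-platform"   # illustrative; must follow the naming rules above
aws:
  region: "us-east-2"
# kubernetes is optional and can be omitted entirely
```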
AWS Spec
region (String)
The name of the AWS region where you want to deploy your infrastructure. This is the canonical name of the region, such as us-east-2 or eu-west-3.
Required: Yes

eks (Object - EKS Spec)
Configurations for the eks cluster and related components, such as node-groups and logging.
Required: No

routing (Object - Routing Spec)
Configurations related to enabling public access for your services, including Route53 zone creation, the ACM TLS certificate, and so on.
Required: No
VPC Spec
cidr (String)
The IPv4 network range for the VPC, in CIDR notation.
Required: No
Default: 10.8.0.0/16

privateSubnets (Array[Subnet Spec])
Configuration for the private subnets to be attached to the VPC. These subnets are where all the eks workloads (your service pods) will be deployed.
Required: No
Default:
  - cidr: "10.8.8.0/21"
  - cidr: "10.8.16.0/21"

publicSubnets (Array[Subnet Spec])
Configuration for the public subnets to be attached to the VPC. Used for allowing public ingress/egress out of your Kubernetes cluster.
Required: No
Default:
  - cidr: "10.8.1.0/26"
  - cidr: "10.8.2.0/26"
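As a sketch, a VPC config overriding the defaults above might look like this. The tables on this page do not show which key of the AWS Spec holds the VPC Spec, so the vpc key below is an assumption based on the AWS Spec description; all CIDR values are illustrative, and each subnet CIDR must fall inside the VPC CIDR:

```yaml
aws:
  region: "us-east-2"
  vpc:                          # key name assumed; not shown in the tables above
    cidr: "10.16.0.0/16"
    privateSubnets:
      - cidr: "10.16.8.0/21"    # eks workloads (service pods) run here
      - cidr: "10.16.16.0/21"
    publicSubnets:
      - cidr: "10.16.1.0/26"    # public ingress/egress
      - cidr: "10.16.2.0/26"
```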
EKS Spec
nodeGroups (Array[EKSNodeGroup Spec])
Configuration for the node-groups of the EKS cluster. Most people won't need to override this value, since none of your workloads are deployed on this node-group: it exists only to launch Karpenter and External Secrets Operator. Nodes for the rest of the workloads are provisioned by Karpenter.
Required: No
Default:
  - name: "notops-default"
    instanceType: "t3.medium"
    minNodes: 2
    maxNodes: 20
    capacityType: "SPOT"
    amiType: "BOTTLEROCKET_x86_64"

logging (Object - EKSLogging Spec)
Configurations for CloudWatch logging for the EKS cluster.
Required: No
Default:
  logTypes:
    - "API"
    - "AUDIT"
  retentionDays: 7
Routing Spec
The SSL cert is created and managed using AWS Certificate Manager. These certificates are free, and can be renewed automatically. There's also an option for you to import certificates bought from third-party providers.
createSSLCert (Boolean)
Whether the CLI should create a TLS/SSL certificate in AWS Certificate Manager.
Required: No
Default: true

sslCertARN (String)
The AWS Certificate Manager ARN for an existing certificate, imported from a third-party certificate provider you may have already purchased from for your domain. This certificate is assumed to be pre-validated. Follow these steps for importing a certificate into AWS Certificate Manager.
This config should only be set if the createSSLCert flag is set to false.
Required: Yes, if createSSLCert is set to false

domainName (String)
The public DNS domain-name to use with your services. This is usually the top-level domain for your organization, like example.com.
Required: Yes

enableWildcardSubdomains (Boolean)
Whether the SSL cert should allow wildcard sub-domains. If the domain you are using is example.com, setting this to true will allow *.example.com.
Required: No
Default: true

subjectAlternativeNames (Array[String])
Additional values, most commonly hostnames, to attach to the SSL certificate using Subject Alternative Names.
Required: No
Default: []

createHostedZoneForDomain (Boolean)
Whether to create a public Route53 hosted-zone for your domain. If your domain is managed by a third-party provider, e.g. Namecheap or GoDaddy, set this to false. Alternatively, you can set this to true and configure your domain-provider with the nameservers from AWS.
Required: No
Default: true
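For example, a routing block for a domain whose certificate was imported from a third-party provider, and whose DNS stays with that provider, might look like the following sketch (the domain is illustrative and the ARN is a placeholder):

```yaml
routing:
  domainName: "example.com"
  createSSLCert: false
  # Required because createSSLCert is false; placeholder ARN for an imported ACM certificate
  sslCertARN: "arn:aws:acm:us-east-2:111122223333:certificate/abcd1234-example"
  # DNS is managed by the third-party provider, so skip the Route53 hosted-zone
  createHostedZoneForDomain: false
```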
Subnet Spec
cidr (String)
The IPv4 network range for this subnet, in CIDR notation. This CIDR must be contained within the CIDR of the VPC to which this subnet is going to be attached.
Required: Yes
EKSNodeGroup Spec
name (String)
The name of this node-group. If you create multiple node-groups, all node-group names must be unique within an EKS cluster.
This name must follow these rules:
- At most 63 characters in length
- Must start with a letter or digit, but can also include hyphens and underscores for the remaining characters
Required: Yes

instanceType (String)
One of the available EC2 instance types to use for the nodes in this node-group. For example, t3.large or m7g.12xlarge. See all available types here.
Required: Yes

minNodes (Integer)
The minimum number of worker nodes. Must be a number greater than 0.
Required: Yes

maxNodes (Integer)
The maximum number of worker nodes. Must be greater than or equal to minNodes.
Required: Yes

capacityType (String)
One of the following values:
- SPOT
- ON_DEMAND
Required: Yes

amiType (String)
One of the following values:
- AL2_x86_64
- AL2_x86_64_GPU
- AL2_ARM_64
- BOTTLEROCKET_ARM_64
- BOTTLEROCKET_x86_64
- BOTTLEROCKET_ARM_64_NVIDIA
- BOTTLEROCKET_x86_64_NVIDIA
Required: Yes
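Combining the fields above, an entry in nodeGroups might look like this sketch (the name and sizing values are illustrative):

```yaml
nodeGroups:
  - name: "critical-addons"        # illustrative; must be unique within the cluster
    instanceType: "m7g.large"      # Graviton instance, so an ARM AMI type is used below
    minNodes: 2
    maxNodes: 5
    capacityType: "ON_DEMAND"
    amiType: "BOTTLEROCKET_ARM_64"
```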
EKSLogging Spec
Amazon EKS control plane logging provides audit and diagnostic logs directly from the Amazon EKS control plane to CloudWatch Logs in your account. In clusters created by NotOps, this logging is enabled by default for certain components.
logTypes (Array[String])
The EKS control-plane components for which to generate logs. Each log-type corresponds to a control-plane component. To learn more about these components, see Kubernetes Components in the Kubernetes documentation. The following log-types are available:
- API: Enables logs for the kube-apiserver component
- AUDIT: Kubernetes audit logs provide a record of the individual users, administrators, or system components that have affected your cluster. See auditing for more details
- AUTHENTICATOR: Authenticator logs are unique to Amazon EKS. These logs represent the control-plane component that Amazon EKS uses for Kubernetes Role Based Access Control (RBAC) authentication using IAM credentials
- CONTROLLER_MANAGER: The controller manager manages the core control loops that are shipped with Kubernetes. For more information, see kube-controller-manager
- SCHEDULER: The scheduler component manages when and where to run Pods in your cluster. For more information, see kube-scheduler
Required: Yes. You can specify an empty array to disable all control-plane logs

retentionDays (Integer)
The number of days to retain the control-plane logs in CloudWatch. Must be one of: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, 3653.
Required: Yes
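For instance, to capture audit and authenticator logs and keep them for 90 days, the logging block could look like this sketch (the chosen log-types and retention are illustrative):

```yaml
logging:
  logTypes:
    - "AUDIT"
    - "AUTHENTICATOR"
  retentionDays: 90   # must be one of the allowed values listed above
```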
Kubernetes Spec
global (Object - K8sGlobal Spec)
The shared config that will be applied to all services installed by the CLI.
Required: No
Default: {}
argocd (Object)
Configuration for the Argo CD Helm Chart. All values from the chart's default values.yaml are supported and can be overridden.
Required: No
Default:
  redis-ha:
    enabled: false
  controller:
    replicas: 1
  server:
    autoscaling:
      enabled: true
      minReplicas: 1
  repoServer:
    autoscaling:
      enabled: true
      minReplicas: 1
  applicationSet:
    enabled: false
awsLoadBalancerController (Object)
Configuration for the AWS Load Balancer Controller Helm Chart. All values from the chart's default values.yaml are supported and can be overridden.
Required: No
Default:
  replicaCount: 1

externalSecrets (Object)
Configuration for the External Secrets Helm Chart. All values from the chart's default values.yaml are supported and can be overridden.
Required: No
Default:
  # Without this, the generated names look like "release-name-external-secrets"
  fullnameOverride: "external-secrets"
istio (Object)
Configuration for the Istio Helm charts. The three charts have namespaced configs and a shared config under a global key. Because the configs are namespaced, we can provide a single config object that works with all three.
Required: No
Default:
  global:
    priorityClassName: system-cluster-critical
  ##############################################################################################################
  # CNI chart section from https://artifacthub.io/packages/helm/istio-official/cni/1.20.2?modal=values
  ##############################################################################################################
  cni:
    enabled: true
    chained: true # it's true by default, but we want to make it explicit
  ##############################################################################################################
  # Istio Discovery Chart section from https://artifacthub.io/packages/helm/istio-official/istiod/1.20.2?modal=values
  ##############################################################################################################
  meshConfig:
    accessLogFile: /dev/stdout
  # This has to be configured separately from the "cni" section
  # https://istio.io/latest/docs/setup/additional-setup/cni/#installing-with-helm
  istio_cni:
    enabled: true
    chained: true # it's true by default, but we want to make it explicit
istioIngressGateway (Object)
Creates an Istio Gateway for routing ingress traffic into the Kubernetes cluster, using the Istio Gateway Helm Chart. All values from the chart's default values.yaml are supported and can be overridden.
Required: No
Default:
  name: "istio-ingressgateway"
  service:
    ports:
      - name: http
        port: 443
        protocol: TCP
        targetPort: 80
    annotations:
      "service.beta.kubernetes.io/aws-load-balancer-scheme": "internet-facing"
      "service.beta.kubernetes.io/aws-load-balancer-type": "external"
      "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip"
      "service.beta.kubernetes.io/aws-load-balancer-healthcheck-port": "15021"
      "service.beta.kubernetes.io/aws-load-balancer-healthcheck-path": "/healthz/ready"
karpenter (Object)
Configuration for the Karpenter Helm Chart. All values from the chart's default values.yaml are supported and can be overridden.
Required: No
Default:
  fullnameOverride: "karpenter"
  # Need karpenter to be very reliable
  # It already has default settings for topology spread constraints and node affinity/anti-affinity to spread it
  # across AZs (if multiple AZs are available)
  replicas: 2
K8sGlobal Spec
secrets (Object - K8sSecrets Spec)
Global secrets to use with all Kubernetes services deployed by the CLI. For example, secrets that allow pulling from a private docker registry.
Required: No
Default: {}
K8sDocker Spec
registryUrl (String)
The docker registry URL where you have mirrored the images for all the Kubernetes services deployed by the CLI.
If this is a private registry, you must specify the imagePullSecret config that provides the credentials for pulling from that registry.
Required: No
K8sSecrets Spec
dockerImagePull (Array[K8sImagePullSecret Spec])
A list of secrets to be used for pulling container images from a private registry. At least one secret must be specified if a private registry is being used to host the images to be deployed by the CLI. If more than one secret is specified, they will be tried in order until one succeeds.
Required: No
Default: {}
K8sImagePullSecret Spec
name (String)
The name of the secret. If you provide multiple secrets, each must have a unique name. This name will also be used to create a Kubernetes Secret, and must be a valid name for such objects.
Required: Yes

providerType (String)
One of:
- AWS_SECRETS_MANAGER (only one provider supported at the moment)
Required: Yes
K8sImagePullSecretConfig Spec
This config object will differ based on the type of the Secret Provider used. See K8sImagePullSecret Spec for a list of supported providers.
AWS_SECRETS_MANAGER

path (String)
The name of the secret in AWS Secrets Manager. This usually looks like a file-system path, for example /my-team/service-x/database.
The name must contain between 1 and 512 characters.
Required: Yes
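Putting the secret specs together, a global image-pull secret backed by AWS Secrets Manager might look like the following sketch. The tables above do not name the key that holds the provider-specific config object, so the config key below is an assumption, and the secret name and path are placeholders:

```yaml
kubernetes:
  global:
    secrets:
      dockerImagePull:
        - name: "private-registry-creds"        # placeholder; becomes a Kubernetes Secret name
          providerType: "AWS_SECRETS_MANAGER"
          # "config" key is assumed; the path value is a placeholder
          config:
            path: "/my-team/registry/credentials"
```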