
Added chart for nfs-server-provisioner

Fábio Kaiser Rauber, 3 years ago (branch: master, parent commit 79bc6b6048)
  1. charts/nfs-server-provisioner/v1.4.0/Chart.yaml (+18)
  2. charts/nfs-server-provisioner/v1.4.0/README.md (+211)
  3. charts/nfs-server-provisioner/v1.4.0/templates/NOTES.txt (+26)
  4. charts/nfs-server-provisioner/v1.4.0/templates/_helpers.tpl (+43)
  5. charts/nfs-server-provisioner/v1.4.0/templates/clusterrole.yaml (+34)
  6. charts/nfs-server-provisioner/v1.4.0/templates/priorityclass.yaml (+14)
  7. charts/nfs-server-provisioner/v1.4.0/templates/rolebinding.yaml (+19)
  8. charts/nfs-server-provisioner/v1.4.0/templates/service.yaml (+106)
  9. charts/nfs-server-provisioner/v1.4.0/templates/serviceaccount.yaml (+11)
  10. charts/nfs-server-provisioner/v1.4.0/templates/statefulset.yaml (+146)
  11. charts/nfs-server-provisioner/v1.4.0/templates/storageclass.yaml (+28)
  12. charts/nfs-server-provisioner/v1.4.0/values.yaml (+112)

charts/nfs-server-provisioner/v1.4.0/Chart.yaml (+18)

@@ -0,0 +1,18 @@
apiVersion: v1
appVersion: 3.0.0
description: nfs-server-provisioner is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly & easily deploy shared storage that works almost anywhere.
name: nfs-server-provisioner
version: 1.4.0
maintainers:
- name: kiall
  email: kiall@macinnes.ie
- name: kvaps
  email: kvapss@gmail.com
- name: joaocc
  email: joaocc-dev@live.com
home: https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner
sources:
- https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner/tree/HEAD/charts/nfs-server-provisioner
keywords:
- nfs
- storage

charts/nfs-server-provisioner/v1.4.0/README.md (+211)

@@ -0,0 +1,211 @@
# NFS Server Provisioner
[NFS Server Provisioner](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly
& easily deploy shared storage that works almost anywhere.
This chart deploys the Kubernetes [external-storage project's](https://github.com/kubernetes-incubator/external-storage)
`nfs` provisioner. This provisioner includes a built-in NFS server and is not intended for connecting to a pre-existing
NFS server. If you have a pre-existing NFS server, please consider using the [NFS Client Provisioner](https://github.com/kubernetes-incubator/external-storage/tree/HEAD/nfs-client)
instead.
## TL;DR;
```console
$ helm install stable/nfs-server-provisioner
```
> **Warning**: While installing in the default configuration will work, any data stored on
the dynamic volumes provisioned by this chart will not be persistent!
## Introduction
This chart bootstraps a [nfs-server-provisioner](https://github.com/kubernetes-incubator/external-storage/tree/HEAD/nfs)
deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh)
package manager.
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
$ helm install my-release nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner
```
The command deploys nfs-server-provisioner on the Kubernetes cluster in the default
configuration. The [configuration](#configuration) section lists the parameters
that can be configured during installation.
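To verify the release, a quick check is to confirm that the pods are running and, with the default settings, that the `nfs` StorageClass exists. The commands below assume the default chart labels and the default `storageClass.name`:
```console
$ helm status my-release
$ kubectl get pods -l app=nfs-server-provisioner,release=my-release
$ kubectl get storageclass nfs
```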
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and
deletes the release.
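> **Note**: When `persistence.enabled` is set, the data PVC created through the StatefulSet's `volumeClaimTemplates` is not removed by `helm delete` and must be cleaned up manually if the data is no longer needed. A sketch, with an illustrative claim name for a release called `my-release`:
```console
$ kubectl get pvc
$ kubectl delete pvc data-my-release-nfs-server-provisioner-0
```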
## Configuration
The following table lists the configurable parameters of the nfs-server-provisioner chart and
their default values.
| Parameter | Description | Default |
|:-------------------------------|:----------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------|
| `extraArgs` | [Additional command line arguments](https://github.com/kubernetes-incubator/external-storage/blob/HEAD/nfs/docs/deployment.md#arguments) | `{}` |
| `imagePullSecrets` | Specify image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `image.repository` | The image repository to pull from | `k8s.gcr.io/sig-storage/nfs-provisioner` |
| `image.tag` | The image tag to pull | `v3.0.0` |
| `image.digest` | The image digest to pull, this option has precedence over `image.tag` | `nil` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `service.type` | service type | `ClusterIP` |
| `service.nfsPort` | TCP port on which the nfs-server-provisioner NFS service is exposed | `2049` |
| `service.mountdPort` | TCP port on which the nfs-server-provisioner mountd service is exposed | `20048` |
| `service.rpcbindPort` | TCP port on which the nfs-server-provisioner RPC service is exposed | `111` |
| `service.nfsNodePort` | if `service.type` is `NodePort` and this is non-empty, sets the nfs-server-provisioner node port of the NFS service | `nil` |
| `service.mountdNodePort` | if `service.type` is `NodePort` and this is non-empty, sets the nfs-server-provisioner node port of the mountd service | `nil` |
| `service.rpcbindNodePort` | if `service.type` is `NodePort` and this is non-empty, sets the nfs-server-provisioner node port of the RPC service | `nil` |
| `persistence.enabled` | Enable config persistence using PVC | `false` |
| `persistence.storageClass` | PVC Storage Class for config volume | `nil` |
| `persistence.accessMode` | PVC Access Mode for config volume | `ReadWriteOnce` |
| `persistence.size` | PVC Storage Request for config volume | `1Gi` |
| `storageClass.create` | Enable creation of a StorageClass to consume this nfs-server-provisioner instance | `true` |
| `storageClass.provisionerName` | The provisioner name for the storageclass | `cluster.local/{release-name}-{chart-name}` |
| `storageClass.defaultClass` | Whether to set the created StorageClass as the clusters default StorageClass | `false` |
| `storageClass.name` | The name to assign the created StorageClass | `nfs` |
| `storageClass.allowVolumeExpansion` | Allow the base storage PVC to be dynamically resized (set to null to disable) | `true` |
| `storageClass.parameters` | Parameters for StorageClass | `{}` |
| `storageClass.mountOptions` | Mount options for StorageClass | `[ "vers=3" ]` |
| `storageClass.reclaimPolicy` | ReclaimPolicy field of the class, which can be either Delete or Retain | `Delete` |
| `resources` | Resource limits for nfs-server-provisioner pod | `{}` |
| `nodeSelector` | Map of node labels for pod assignment | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `affinity` | Map of node/pod affinities | `{}` |
| `podSecurityContext` | Security context settings for nfs-server-provisioner pod (see https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) | `{}` |
| `priorityClass.create` | Enable creation of a PriorityClass resource for this nfs-server-provisioner instance | `false` |
| `priorityClass.name` | Set a PriorityClass name to override the default name | `""` |
| `priorityClass.value` | PriorityClass value. The higher the value, the higher the scheduling priority | `5` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:
```console
$ helm install nfs-server-provisioner nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
--set=image.tag=v1.0.8,resources.limits.cpu=200m
```
Alternatively, a YAML file that specifies the values for the above parameters
can be provided while installing the chart. For example,
```console
$ helm install nfs-server-provisioner nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml) as an example
## Persistence
The nfs-server-provisioner image stores its configuration data and, importantly, **the dynamic volumes it
manages**, in the `/export` path of the container.
The chart mounts a [Persistent Volume](http://kubernetes.io/docs/user-guide/persistent-volumes/)
at this location. The volume can be created using dynamic volume
provisioning. However, **it is highly recommended** to explicitly specify
a storage class to use rather than accepting the cluster's default, or to pre-create
a volume for each replica.
If this chart is deployed with more than one replica, `storageClass.defaultClass=true`,
and `persistence.storageClass` not set, then the second and subsequent replicas will end up using the first
replica to provision storage - which is almost certainly not the desired outcome.
## Recommended Persistence Configuration Examples
The following is a recommended configuration example when another storage class
exists to provide persistence:
persistence:
  enabled: true
  storageClass: "standard"
  size: 200Gi

storageClass:
  defaultClass: true
On many clusters, the cloud provider integration will create a "standard" storage
class which will create a volume (e.g. a Google Compute Engine Persistent Disk or
Amazon EBS volume) to provide persistence.
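A quick way to confirm which storage classes are already available on a given cluster (and which one is marked as default) before choosing a value for `persistence.storageClass`:
```console
$ kubectl get storageclass
```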
---
The following is a recommended configuration example when another storage class
does not exist to provide persistence:
persistence:
  enabled: true
  storageClass: "-"
  size: 200Gi

storageClass:
  defaultClass: true
In this configuration, a `PersistentVolume` must be created for each replica
to use. Installing the Helm chart and then inspecting the `PersistentVolumeClaim`s
created will provide the necessary names for your `PersistentVolume`s to bind to.
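For example, with one replica and the default naming, the pending claim can be listed as follows; its name (e.g. `data-nfs-server-provisioner-0`) is the value the `claimRef.name` of the pre-created `PersistentVolume` must match:
```console
$ kubectl get pvc
```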
An example of the necessary `PersistentVolume`:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    fsType: "ext4"
    pdName: "data-nfs-server-provisioner-0"
  claimRef:
    namespace: kube-system
    name: data-nfs-server-provisioner-0
---
The following is a recommended configuration example for running on bare metal with a hostPath volume:
persistence:
  enabled: true
  storageClass: "-"
  size: 200Gi

storageClass:
  defaultClass: true

nodeSelector:
  kubernetes.io/hostname: {node-name}
In this configuration, a `PersistentVolume` must be created for each replica
to use. Installing the Helm chart and then inspecting the `PersistentVolumeClaim`s
created will provide the necessary names for your `PersistentVolume`s to bind to.
An example of the necessary `PersistentVolume`:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/volumes/data-nfs-server-provisioner-0
  claimRef:
    namespace: kube-system
    name: data-nfs-server-provisioner-0
> **Warning**: `hostPath` volumes cannot be migrated between machines by Kubernetes, as such,
in this example, we have restricted the `nfs-server-provisioner` pod to run on a single node. This
is unsuitable for production deployments.
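As a quick end-to-end test after installation, a claim against the created storage class (named `nfs` by default) can be mounted from a pod. This is a minimal sketch; the object names and the pod image are illustrative:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-nfs-claim
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "echo hello > /mnt/data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-nfs-claim
```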

charts/nfs-server-provisioner/v1.4.0/templates/NOTES.txt (+26)

@@ -0,0 +1,26 @@
The NFS Provisioner service has now been installed.
{{ if .Values.storageClass.create -}}
A storage class named '{{ .Values.storageClass.name }}' has now been created
and is available to provision dynamic volumes.
You can use this storageclass by creating a `PersistentVolumeClaim` with the
correct storageClassName attribute. For example:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-dynamic-volume-claim
spec:
  storageClassName: "{{ .Values.storageClass.name }}"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
{{ else -}}
A storage class has NOT been created. You may create a custom `StorageClass`
resource with a `provisioner` attribute of `{{ include "nfs-provisioner.provisionerName" . }}`.
{{ end -}}
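For reference, such a custom `StorageClass` might look like the sketch below; the class name is illustrative, and the `provisioner` value must match the name produced by the provisionerName helper (by default `cluster.local/<release fullname>`):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-nfs
provisioner: cluster.local/my-release-nfs-server-provisioner
mountOptions:
  - vers=3
reclaimPolicy: Delete
```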

charts/nfs-server-provisioner/v1.4.0/templates/_helpers.tpl (+43)

@@ -0,0 +1,43 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nfs-provisioner.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nfs-provisioner.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nfs-provisioner.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Determine the provisioner name for the StorageClass.
*/}}
{{- define "nfs-provisioner.provisionerName" -}}
{{- if .Values.storageClass.provisionerName }}
{{- printf .Values.storageClass.provisionerName }}
{{- else -}}
cluster.local/{{ include "nfs-provisioner.fullname" . }}
{{- end }}
{{- end }}

charts/nfs-server-provisioner/v1.4.0/templates/clusterrole.yaml (+34)

@@ -0,0 +1,34 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "nfs-provisioner.fullname" . }}
  labels:
    app: {{ include "nfs-provisioner.name" . }}
    chart: {{ include "nfs-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
{{- end }}

charts/nfs-server-provisioner/v1.4.0/templates/priorityclass.yaml (+14)

@@ -0,0 +1,14 @@
{{- if .Values.priorityClass.create -}}
kind: PriorityClass
apiVersion: scheduling.k8s.io/v1
metadata:
  name: {{ .Values.priorityClass.name | default (include "nfs-provisioner.fullname" .) }}
  labels:
    app: {{ include "nfs-provisioner.name" . }}
    chart: {{ include "nfs-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
value: {{ .Values.priorityClass.value }}
globalDefault: false
description: "This priority class should be used for nfs-provisioner pods only."
{{- end }}

charts/nfs-server-provisioner/v1.4.0/templates/rolebinding.yaml (+19)

@@ -0,0 +1,19 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: {{ include "nfs-provisioner.name" . }}
    chart: {{ include "nfs-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ include "nfs-provisioner.fullname" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "nfs-provisioner.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ include "nfs-provisioner.fullname" . }}
    namespace: {{ .Release.Namespace }}
{{- end }}

charts/nfs-server-provisioner/v1.4.0/templates/service.yaml (+106)

@@ -0,0 +1,106 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "nfs-provisioner.fullname" . }}
  labels:
    app: {{ include "nfs-provisioner.name" . }}
    chart: {{ include "nfs-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.nfsPort }}
      targetPort: nfs
      protocol: TCP
      name: nfs
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nfsNodePort))) }}
      nodePort: {{ .Values.service.nfsNodePort }}
      {{- end }}
    - port: {{ .Values.service.nfsPort }}
      targetPort: nfs-udp
      protocol: UDP
      name: nfs-udp
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nfsNodePort))) }}
      nodePort: {{ .Values.service.nfsNodePort }}
      {{- end }}
    - port: {{ .Values.service.nlockmgrPort }}
      targetPort: nlockmgr
      protocol: TCP
      name: nlockmgr
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nlockmgrNodePort))) }}
      nodePort: {{ .Values.service.nlockmgrNodePort }}
      {{- end }}
    - port: {{ .Values.service.nlockmgrPort }}
      targetPort: nlockmgr-udp
      protocol: UDP
      name: nlockmgr-udp
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nlockmgrNodePort))) }}
      nodePort: {{ .Values.service.nlockmgrNodePort }}
      {{- end }}
    - port: {{ .Values.service.mountdPort }}
      targetPort: mountd
      protocol: TCP
      name: mountd
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.mountdNodePort))) }}
      nodePort: {{ .Values.service.mountdNodePort }}
      {{- end }}
    - port: {{ .Values.service.mountdPort }}
      targetPort: mountd-udp
      protocol: UDP
      name: mountd-udp
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.mountdNodePort))) }}
      nodePort: {{ .Values.service.mountdNodePort }}
      {{- end }}
    - port: {{ .Values.service.rquotadPort }}
      targetPort: rquotad
      protocol: TCP
      name: rquotad
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.rquotadNodePort))) }}
      nodePort: {{ .Values.service.rquotadNodePort }}
      {{- end }}
    - port: {{ .Values.service.rquotadPort }}
      targetPort: rquotad-udp
      protocol: UDP
      name: rquotad-udp
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.rquotadNodePort))) }}
      nodePort: {{ .Values.service.rquotadNodePort }}
      {{- end }}
    - port: {{ .Values.service.rpcbindPort }}
      targetPort: rpcbind
      protocol: TCP
      name: rpcbind
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.rpcbindNodePort))) }}
      nodePort: {{ .Values.service.rpcbindNodePort }}
      {{- end }}
    - port: {{ .Values.service.rpcbindPort }}
      targetPort: rpcbind-udp
      protocol: UDP
      name: rpcbind-udp
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.rpcbindNodePort))) }}
      nodePort: {{ .Values.service.rpcbindNodePort }}
      {{- end }}
    - port: {{ .Values.service.statdPort }}
      targetPort: statd
      protocol: TCP
      name: statd
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.statdNodePort))) }}
      nodePort: {{ .Values.service.statdNodePort }}
      {{- end }}
    - port: {{ .Values.service.statdPort }}
      targetPort: statd-udp
      protocol: UDP
      name: statd-udp
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.statdNodePort))) }}
      nodePort: {{ .Values.service.statdNodePort }}
      {{- end }}
  {{- with .Values.service.clusterIP }}
  clusterIP: {{ . }}
  {{- end }}
  {{- with .Values.service.externalIPs }}
  externalIPs:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  selector:
    app: {{ include "nfs-provisioner.name" . }}
    release: {{ .Release.Name }}

charts/nfs-server-provisioner/v1.4.0/templates/serviceaccount.yaml (+11)

@@ -0,0 +1,11 @@
{{- if .Values.rbac.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: {{ include "nfs-provisioner.name" . }}
    chart: {{ include "nfs-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ include "nfs-provisioner.fullname" . }}
{{- end }}

charts/nfs-server-provisioner/v1.4.0/templates/statefulset.yaml (+146)

@@ -0,0 +1,146 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "nfs-provisioner.fullname" . }}
  labels:
    app: {{ include "nfs-provisioner.name" . }}
    chart: {{ include "nfs-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
spec:
  # TODO: Investigate how/if nfs-provisioner can be scaled out beyond 1 replica
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "nfs-provisioner.name" . }}
      release: {{ .Release.Name }}
  serviceName: {{ include "nfs-provisioner.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ include "nfs-provisioner.name" . }}
        chart: {{ include "nfs-provisioner.chart" . }}
        heritage: {{ .Release.Service }}
        release: {{ .Release.Name }}
    spec:
      # NOTE: This is 10 seconds longer than the default nfs-provisioner --grace-period value of 90sec
      terminationGracePeriodSeconds: 100
      serviceAccountName: {{ if .Values.rbac.create }}{{ include "nfs-provisioner.fullname" . }}{{ else }}{{ .Values.rbac.serviceAccountName | quote }}{{ end }}
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          {{- with .Values.image }}
          image: "{{ .repository }}{{ if .digest }}@{{ .digest }}{{ else }}:{{ .tag }}{{ end }}"
          imagePullPolicy: {{ .pullPolicy }}
          {{- end }}
          ports:
            - name: nfs
              containerPort: 2049
              protocol: TCP
            - name: nfs-udp
              containerPort: 2049
              protocol: UDP
            - name: nlockmgr
              containerPort: 32803
              protocol: TCP
            - name: nlockmgr-udp
              containerPort: 32803
              protocol: UDP
            - name: mountd
              containerPort: 20048
              protocol: TCP
            - name: mountd-udp
              containerPort: 20048
              protocol: UDP
            - name: rquotad
              containerPort: 875
              protocol: TCP
            - name: rquotad-udp
              containerPort: 875
              protocol: UDP
            - name: rpcbind
              containerPort: 111
              protocol: TCP
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
            - name: statd
              containerPort: 662
              protocol: TCP
            - name: statd-udp
              containerPort: 662
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner={{ include "nfs-provisioner.provisionerName" . }}"
            {{- range $key, $value := .Values.extraArgs }}
            - "-{{ $key }}={{ $value }}"
            {{- end }}
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: {{ include "nfs-provisioner.fullname" . }}
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: data
              mountPath: /export
          {{- with .Values.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      {{- with .Values.podSecurityContext }}
      securityContext:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- if (or .Values.priorityClass.name .Values.priorityClass.create) }}
      priorityClassName: {{ .Values.priorityClass.name | default (include "nfs-provisioner.fullname" .) | quote }}
      {{- end }}
      {{- if not .Values.persistence.enabled }}
      volumes:
        - name: data
          emptyDir: {}
      {{- end }}
  {{- if .Values.persistence.enabled }}
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ {{ .Values.persistence.accessMode | quote }} ]
        {{- if .Values.persistence.storageClass }}
        {{- if (eq "-" .Values.persistence.storageClass) }}
        storageClassName: ""
        {{- else }}
        storageClassName: {{ .Values.persistence.storageClass | quote }}
        {{- end }}
        {{- end }}
        resources:
          requests:
            storage: {{ .Values.persistence.size | quote }}
  {{- end }}

charts/nfs-server-provisioner/v1.4.0/templates/storageclass.yaml (+28)

@@ -0,0 +1,28 @@
{{- if .Values.storageClass.create -}}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.storageClass.name }}
  labels:
    app: {{ include "nfs-provisioner.name" . }}
    chart: {{ include "nfs-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  {{- if .Values.storageClass.defaultClass }}
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  {{- end }}
provisioner: {{ include "nfs-provisioner.provisionerName" . }}
reclaimPolicy: {{ .Values.storageClass.reclaimPolicy }}
{{- if .Values.storageClass.allowVolumeExpansion }}
allowVolumeExpansion: {{ .Values.storageClass.allowVolumeExpansion }}
{{- end }}
{{- with .Values.storageClass.parameters }}
parameters:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- with .Values.storageClass.mountOptions }}
mountOptions:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- end }}

charts/nfs-server-provisioner/v1.4.0/values.yaml (+112)

@@ -0,0 +1,112 @@
# Default values for nfs-provisioner.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
# imagePullSecrets:
image:
  repository: k8s.gcr.io/sig-storage/nfs-provisioner
  tag: v3.0.0
  # digest:
  pullPolicy: IfNotPresent

# For a list of available arguments
# Please see https://github.com/kubernetes-incubator/external-storage/blob/HEAD/nfs/docs/deployment.md#arguments
extraArgs: {}
  # device-based-fsids: false
  # grace-period: 0

service:
  type: ClusterIP

  nfsPort: 2049
  nlockmgrPort: 32803
  mountdPort: 20048
  rquotadPort: 875
  rpcbindPort: 111
  statdPort: 662

  # nfsNodePort:
  # nlockmgrNodePort:
  # mountdNodePort:
  # rquotadNodePort:
  # rpcbindNodePort:
  # statdNodePort:

  # clusterIP:
  externalIPs: []

persistence:
  enabled: false

  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

  accessMode: ReadWriteOnce
  size: 1Gi

## For creating the StorageClass automatically:
storageClass:
  create: true

  ## Set a provisioner name. If unset, a name will be generated.
  # provisionerName:

  ## Set StorageClass as the default StorageClass
  ## Ignored if storageClass.create is false
  defaultClass: false

  ## Set a StorageClass name
  ## Ignored if storageClass.create is false
  name: nfs

  # set to null to prevent expansion
  allowVolumeExpansion: true

  ## StorageClass parameters
  parameters: {}

  mountOptions:
    - vers=3

  ## ReclaimPolicy field of the class, which can be either Delete or Retain
  reclaimPolicy: Delete

## For RBAC support:
rbac:
  create: true

  ## Ignored if rbac.create is true
  ##
  serviceAccountName: default

## For creating the PriorityClass automatically:
priorityClass:
  ## Enable creation of a PriorityClass resource for this nfs-server-provisioner instance
  create: false
  ## Set a PriorityClass name to override the default name
  name: ""
  ## PriorityClass value. The higher the value, the higher the scheduling priority
  value: 5

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
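To illustrate how `extraArgs` is consumed, the StatefulSet template renders each key/value pair as an additional `-key=value` flag after the `-provisioner` argument. A sketch using the commented example keys above:
```yaml
# values override (illustrative)
extraArgs:
  device-based-fsids: false
  grace-period: 0
```
With a release named `my-release`, this would render the container args roughly as:
```yaml
args:
  - "-provisioner=cluster.local/my-release-nfs-server-provisioner"
  - "-device-based-fsids=false"
  - "-grace-period=0"
```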