A driver acts on the boundary between Artemis core and a given provisioning service. By implementing basic methods like "inspect VM" or "release resources", a driver provides the necessary level of polymorphism, allowing Artemis to switch transparently between pools as needed.
Capabilities are features whose support is built into the driver. The support is usually optional, and a feature may be disabled, but it is not possible to enable a feature the driver does not support.
supports-snapshots
: the driver can handle snapshots of some kind.

supports-spot-instances
: the driver can handle spot instance requests.

supports-native-post-install-script
: the driver can handle the post-installation script on its own. Artemis core will execute the script in the preparation stage for drivers that do not have this capability.

Driver | supports-snapshots | supports-spot-instances | supports-native-post-install-script |
---|---|---|---|
aws | no | yes | yes |
azure | no | no | yes |
beaker | no | no | no |
localhost | no | no | no |
openstack | yes | no | yes |
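For example, a maintainer may want to switch off a capability the driver does support. Below is a minimal sketch of such a pool configuration; it assumes capability toggles live under the pool's capabilities block (the same block the disable-guest-logs example later in this section uses) and are named after the capabilities above. The exact keys may differ.
capabilities:
  # Assumed toggle: this aws-backed pool should never request spot instances,
  # even though the driver itself supports them.
  supports-spot-instances: false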
A guest request can specify various HW constraints the provisioned machines must satisfy, for example a desired number of CPU cores or a minimal root disk size. These constraints are eventually used by drivers to find - or create - suitable guests. Unfortunately, not all drivers are capable of handling all possible HW requirements, and limitations may apply.
HW constraint | aws | azure | beaker | localhost | openstack |
---|---|---|---|---|---|
arch | yes | yes | yes | yes | yes |
boot.method | no | no | yes * | no | no |
cpu.cores | yes | no | yes | no | yes |
cpu.family | yes * | no | yes | no | yes * |
cpu.family_name | yes * | no | no | no | yes * |
cpu.model | yes * | no | yes * | no | yes * |
cpu.model_name | yes * | no | yes * | no | yes * |
cpu.processors | no | no | no | no | no |
disk[].space | partial [1] * | no | partial [1] | no | partial [1] |
hostname | no | no | yes | no | no |
memory | yes | no | yes | no | yes |
network[].type | no | no | no | no | no |
virtualization.hypervisor | yes | no | no | no | no |
virtualization.is_supported | no | no | no | no | no |
virtualization.is_virtualized | yes | no | no | no | yes |
[1] disk[] supports only 1 item.
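To illustrate, a guest request combining several of the well-supported constraints from the table might look like the following sketch. The exact shape of the hw block is an assumption here, and all values are made up.
environment:
  hw:
    # Architecture is supported by every driver.
    arch: x86_64
    constraints:
      # Constraints taken from the well-supported rows of the table above.
      cpu:
        cores: 4
      memory: '>= 8 GiB'
      disk:
        # Note: per footnote [1], drivers with "partial" support handle only 1 disk item.
        - space: '>= 40 GiB'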
Supported by: aws, azure, beaker, localhost, openstack
Term "flavor" represents one half of a template for a future guest. Flavors track various attributes that affect the virtual "hardware" of the final machine backing the guest. For example, number of processors, RAM size, or CPU family.
Term "image" represents the second half of a template for a future guest. Images holds the content of the future virtual machine, file system with installed software, kernel, configuration and so on. There are also less visible attributes tracked by pools, e.g. whether an image supports UEFI or not.
Together, flavors and images play a crucial role in the provisioning process: the set of flavors and the set of images represent the various guest configurations a pool can deliver, and based on this information pools allocate actual cloud resources for a given request. It is up to maintainers to set up pools and their flavors and images to provide the tiers of service most suited for their workflow.
Pool drivers that work with flavors and images must keep track of known objects and their properties. This data must be kept up to date and reflect any changes made by the provisioning services backing their respective pools. For that purpose, drivers periodically query their backend's APIs to download the current state of the objects available to them. This process is automated, controlled by Artemis core, and the gathered information is cached.
The information a pool tracks for all available flavors can be modified through configuration, using the patch-flavors and custom-flavors directives. Each patch is applied to the flavor or flavors matching a given name (or regular expression), and overrides whatever the pool driver was able to collect from the sources available to it at runtime.
Both directives share the same syntax, but their scope is slightly different:
custom-flavors
: adds new flavors that do not exist as far as the pool knows. For example, the OpenStack driver can fetch the list of existing flavors; custom-flavors then allows the maintainer to create additional flavors on top of this basic list.

patch-flavors
: modifies existing information known to the pool, and applies both to real flavors and to those created by the custom-flavors directive.

custom-flavors:
- name: <string>
# Name of already existing flavor that would serve as a template.
# The flavor MUST exist, but it can be a custom flavor created before this patch.
base: <string>
cpu:
processors: <integer>
cores: <integer>
family: <family>
family_name: <string>
model: <integer>
model_name: <string>
flag:
- <string>
...
disk:
- size: <quantity>
model-name: <string>
# Or, to signal flavor can allocate additional disks
- additional-disks:
max-count: <integer>
min-size: <quantity>
max-size: <quantity>
model-name: <string>
...
virtualization:
is-supported: <boolean>
is-virtualized: <boolean>
hypervisor: <string>
patch-flavors:
- name: <string>
# Or, to patch multiple flavors at once:
name-regex: <pattern>
cpu:
processors: <integer>
cores: <integer>
family: <family>
family_name: <string>
model: <integer>
model_name: <string>
flag:
- <string>
...
disk:
- size: <quantity>
model-name: <string>
# Or, to signal flavor can allocate additional disks
- additional-disks:
max-count: <integer>
min-size: <quantity>
max-size: <quantity>
...
virtualization:
is-supported: <boolean>
is-virtualized: <boolean>
hypervisor: <string>
custom-flavors:
# Let's add two custom flavors, with specific disk sizes. Both are based
# on the same flavor, t2.small, and inherit all its properties.
#
# Also, all these flavors can get additional disks, with the actual size depending on the request.
- name: t2.small-20
base: t2.small
disk:
- size: 20 GiB
model-name: PERC H310
- additional-disks:
max-count: 5
min-size: 10 GiB
max-size: 1 TiB
- name: t2.small-40
base: t2.small
disk:
- size: 40 GiB
- additional-disks:
model-name: PERC H310
max-count: 5
min-size: 10 GiB
max-size: 1 TiB
patch-flavors:
# Now, patch all flavors, and set fields we can't extract from the pool's backend API.
- name-regex: "t2\.small-\d+"
cpu:
family: 6
family_name: Haswell
model: 6
model_name: i7-something
flag:
- fpu
- vme
- de
...
# Oh, yes, all these flavors are VMs, not bare metal machines, and we support nested virtualization.
virtualization:
is-supported: true
is-virtualized: true
hypervisor: kvm
# While technically possible, let's not use our smallest flavor for nested virtualization - not enough disk space.
- name: t2.small-20
virtualization:
is-supported: false
The information a pool tracks for all available images can be modified through configuration, using the patch-images directive. Each patch is applied to the image or images matching a given name (or regular expression), and overrides whatever the pool driver was able to collect from the sources available to it at runtime.
patch-images:
- name: <string>
# Or, to patch multiple images at once:
name-regex: <pattern>
ssh:
# Username to use when accessing guest based on this image via SSH
username: <string>
# Port to use when accessing guest based on this image via SSH
port: <integer>
patch-images:
# Reset the playing field: all images run SSH on port 22, and use `root` to log in.
- name-regex: ".*"
ssh:
username: root
port: 22
# For Fedora ones, we need different username.
- name-regex: "Fedora-.+"
ssh:
username: cloud-user
# And one single image is just weird and runs its SSH on a high port.
- name: Fedora-35
ssh:
port: 2222
The most up-to-date information on known flavors and images can be displayed by querying the API:
$ http https://$hostname/_cache/pools/$poolname/image-info
$ http https://$hostname/_cache/pools/$poolname/flavor-info
It is also possible to trigger a refresh of the stored data with the POST method, with no data:
$ http POST https://$hostname/_cache/pools/$poolname/image-info
$ http POST https://$hostname/_cache/pools/$poolname/flavor-info
Supported by: aws, azure, beaker, localhost, openstack
Besides the operational logs related to guest provisioning, drivers often expose additional logs, usually related to provisioning service actions or guest VM operations (terminal or console, output of dmesg, etc.).
The actual list of logs supported by a pool depends on the driver - this is a hard limit, logs the driver does not support cannot be "enabled" - and on pool configuration, where maintainers can disable particular logs on purpose.
Driver | Supported logs |
---|---|
aws | console/blob, console/URL |
azure | - |
beaker | - |
localhost | - |
openstack | console/blob, console/URL |
Each pool can tune down the supported set of guest logs: while it is not possible to enable logs that are not already supported by the pool's driver, it is possible to disable supported logs, preventing users from accessing them.
capabilities:
disable-guest-logs:
- log-name: <string>
content-type: [blob|url]
capabilities:
disable-guest-logs:
# It's supported by the driver, but maintainers do not wish to let users access the live console of any guest from this pool.
- log-name: console
content-type: url
# Also, don't expose /var/log/messages - driver calls this log `messages`, and
# it's available only as a saved blob of text.
- log-name: messages
content-type: blob
Supported by: aws, azure, beaker, localhost, openstack
Pools can be marked as available only when requested by name, via the environment.pool field of the request. Such a pool is ignored by the routing when processing requests that did not ask for this particular pool, making it effectively invisible to more relaxed requests.
use-only-when-addressed: <boolean> # default: false
- name: foo
driver: beaker
parameters:
# Pool "foo" is backed by a Beaker instance, and therefore usually takes longer to provision a machine. Let's
# make it available but only for users that are aware of this limitation, and ask for this pool directly.
use-only-when-addressed: true
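On the request side, a user aware of the trade-off would then address the pool explicitly. Below is a sketch of the relevant fragment of a guest request, with surrounding fields omitted and the exact request schema assumed rather than quoted:
environment:
  # Ask for pool "foo" by name - without this field, the routing would
  # skip pool "foo" entirely, since it is marked use-only-when-addressed.
  pool: foo
  hw:
    arch: x86_64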