
User guide

Intro

Admiral is a highly scalable and very lightweight container management platform for deploying and managing container-based applications.

The Admiral project's capabilities include modeling, provisioning and managing containerized applications via the UI, YAML-based templates or Docker Compose files, as well as configuring and managing container hosts.

The main feature of the Admiral project is enabling users to provision applications that are built from containers. Admiral makes use of the Docker Remote API to provision and manage containers, including retrieving stats and info about container instances. From a deployment perspective, developers use Docker Compose, Admiral templates or the Admiral UI to compose their application and deploy it using the Admiral provisioning and orchestration engine. Cloud administrators are able to manage the container host infrastructure and apply governance to its usage, including grouping of resources, policy-based placements, quotas and reservations, and elastic placement zones.

Configure Container Hosts

The initial configuration step is to configure an already existing Docker host (provisioning a new one is on the roadmap).

Configure Existing Container Docker Host

A step-by-step tutorial with screenshots: Getting started with VMware Admiral Container Service on Photon OS

To configure an already existing Docker container host, navigate to "Hosts" and click on "Add Host".
Admiral communicates with the Docker container host via the Docker Remote API (versions 1.19 - 1.24 are currently tested).

Hence, the Docker host should have the Docker Remote API enabled and all certificates configured properly. A quick way to enable the Docker Remote API:

DOCKER_OPTS='-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock'

Note: The Admiral agent uses a Unix socket file for communication with the Docker daemon. The path to the Unix socket file should be /var/run/docker.sock.
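
How these options are applied depends on the OS. As a minimal sketch for a systemd-based host (assuming the standard drop-in location and the dockerd binary of Docker 1.12+; older releases use "docker daemon" instead), the same flags can be passed to the daemon like this:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/remote-api.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker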

The configuration above enables the Docker Remote API over HTTP (not HTTPS) on a specific port and does NOT enable Docker Remote API authentication (not recommended for production, but fine for initial evaluations and demos). In such a case, the configuration in Admiral should include the whole URL - the scheme (http, since the default one is https) and the port (since the default port is 443 for https and 80 for http). The container host address should be entered in this case like:

http://{docker-host-ip}:4243

Since no authentication is provided, the "Credentials" field in the form should remain empty, and only the Resource Pool needs to be configured besides the Docker host URL.
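
Before adding the host in Admiral, it can be useful to verify that the Docker Remote API is actually reachable. A quick sanity check against the unauthenticated HTTP setup above (replace the placeholder with your host's IP):

# should return a JSON document with the daemon and API versions
curl http://{docker-host-ip}:4243/version

# should list the containers currently running on the host
curl http://{docker-host-ip}:4243/containers/json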

The configuration overall includes:

  • Container host IP or hostname, in the form (http|https)://(IP or hostname)(:port). The default URL scheme is "https" if none is provided.
  • Credentials - for the Docker Remote API, the credentials are in the form of a public/private certificate pair.
  • Select or create a Resource Pool - a "Resource Pool" is a logical grouping of resources that can later be mapped to specific "Resource Groups" in order to apply deployment-related policies.
  • (Optional) Select or create a Deployment Policy. This is optional and not needed most of the time. A "Deployment Policy" is basically a tag that can be used to steer the allocation and placement logic towards specific resource groups or hosts matching the Deployment Policy tag.

Deployment Policies

Deployment policies can be linked to hosts, resource group policies and container definitions. Their role is to set a preference for specific hosts, policies and quotas when deploying a container. For example, if a specific deployment policy is set on 2 out of 5 hosts and the same deployment policy is set on a container being provisioned, then this container will have a preference for deployment on one of these 2 hosts. The same applies when the deployment policy is set on a resource group policy - the container will have a preference to be placed on a host linked to that policy's resource pool. It is good to know that deployment policy tags on a resource group policy have precedence over deployment policy tags on hosts. This means that if a deployment policy is set on a resource group policy, and the same deployment policy is set on a couple of hosts outside that resource group policy, the hosts from the policy's resource pool will have precedence over the hosts which directly have the deployment policy assigned to them.

To set a deployment policy to a host, create or edit the host and use the Deployment Policy field. If you click on “Manage” or “Create” options, a sidebar will be opened, where you can create, edit and delete your deployment policies. To set a deployment policy on a resource group policy, you will need to create or edit the resource group policy and then again use the Deployment Policy field.

When setting the deployment policy on a container, you need to open the container definition edit form. On the Policy tab, you can set the field Deployment Policy.

Policies and Placement

Resource Group Policies

A resource group policy is a way to limit and reserve resources used by a resource group. A resource group policy has the following fields:

  • Name - the name of the policy
  • Group - the group which the policy applies to
  • Resource Pool - the resource pool from which the policy draws the resources to be managed
  • Deployment Policy - matching deployment policy tag
  • Priority - there may be more than one policy per group; the priority specifies the order in which the policies are considered when there is more than one
  • Instances - (integer > 0) the maximum number of containers that can be provisioned
  • Memory Limit - a number between 0 and the memory available in the resource pool; this is the total memory available for resources in this policy, and 0 means no limit
  • CPU Shares - the provisioned resource will be given this amount of CPU shares

Each policy is assigned a resource pool. A resource pool is basically a set of hosts, and the available resources in a resource pool are the sum of all the resources of the hosts inside it. Thus, transitively, a policy manages the resources of a set of hosts. More than one policy may manage a single resource pool, but the policies that manage a resource pool cannot collectively reserve more resources than what's in it. For example, two policies backed by a resource pool with 16 GB of memory could reserve 10 GB and 6 GB, but not 10 GB and 8 GB.

When a container is provisioned, the policies are filtered based on the resource group, the available resources (instances, memory) and the priority. This is the first step of the host selection process. After a resource group policy is picked, the placement procedure continues.

Placement

Hosts are filtered based on their power state and their available memory. Note: in order to filter hosts based on available memory, a memory limit must be set in the container description of the container to be provisioned. Then the affinity filters are applied. The affinity filters are active only when provisioning an application, not when provisioning a single container.

Affinity Filters

Affinity filters are used to filter hosts based on relationships between containers in the same application. The user may require two containers to be provisioned on the same host or on different hosts. In addition, the rules may be hard or soft. In the "Provision Container" form, under "Policy", the user may explicitly set affinity/anti-affinity constraints. An affinity constraint consists of the following:

  • Affinity type - (affinity or anti-affinity) In case of anti-affinity the containers will be placed on different hosts, otherwise they will be placed on the same host
  • Service name - the name of the other container
  • Constraint type - (hard or soft) A hard rule means that if there is no way to satisfy the constraint the provisioning should fail. If the rule is soft the provisioning should continue.

Affinity constraints may be set implicitly depending on other settings:

  • Cluster Size - if the cluster size is bigger than 1, the placement engine will try to spread the multiple containers among as many hosts as it can
  • Volumes From - if "Volumes From" is set, the container will be placed on the same host as the container it gets its volumes from
  • If multiple containers expose the same host port they will be placed on different hosts
  • Containers that have the same pod will be placed on the same host

Templates, Images & Registries

Registries

A Docker registry is a stateless, server-side application that stores and lets you distribute Docker images. You can configure multiple registries to gain access to both public and private images. This can be done under Templates > Manage Registry. To configure a registry, you need to provide its address, a custom registry name, and credentials (optional). The address should start with an http(s) scheme to designate whether the registry is secured or unsecured. If no scheme is provided, https is assumed by default. The port defaults to 5000. You can also choose to enable or disable registries at any time to include or exclude their results from your image search.
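
For illustration, a few possible address forms (the hostnames and IPs below are placeholders):

https://registry.example.com:5000     # secured registry on an explicit port
http://10.0.0.5:5000                  # unsecured registry on an explicit port
registry.example.com                  # no scheme or port - https and port 5000 are assumed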

The container service can interact with both Docker Registry HTTP API V1 and V2. Known supported third-party registries include JFrog Artifactory and Harbor. Docker Hub is enabled by default for all tenants and is not present in the registry list; however, it can be disabled with a system property.

Docker does not normally interact with secure registries configured with certificates signed by an unknown authority. The container service handles this case by automatically uploading untrusted certificates to all Docker hosts, thus enabling the hosts to connect to these registries. In case a certificate cannot be uploaded to a given host, the host is automatically disabled. More info at https://docs.docker.com/registry/insecure/#/using-self-signed-certificates
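
For reference, the manual equivalent of what the container service automates on each host is described in the linked Docker documentation: making the daemon trust the registry's certificate by placing it in the per-registry certificate directory. A sketch, where the registry address and certificate file are placeholders:

# trust the self-signed certificate of myregistry.example.com:5000 on this host
sudo mkdir -p /etc/docker/certs.d/myregistry.example.com:5000
sudo cp ca.crt /etc/docker/certs.d/myregistry.example.com:5000/ca.crt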

Images

Creating a single container from an image is no different than doing it from the Docker CLI. You have the ability to search for images through the registries that you have defined (see the Registries section above). Once you have found the image you want to create a container from, you can do a single-click provisioning, which creates a container based on the latest tag of the image, attached to the bridge network and publishing all exposed ports. You can also provision a container by providing additional info. This will take you to a form where you can provide most of the known Docker API properties as well as additional properties like affinity rules, health config and links.
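
As a rough illustration, single-click provisioning corresponds to something like the following Docker CLI invocation (the image name is just an example): the latest tag is used, the container is attached to the default bridge network, and all exposed ports are published to random host ports.

# latest tag, default bridge network, all EXPOSEd ports published (-P)
docker run -d -P nginx:latest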

Templates

In addition to provisioning a container from an image, you can also create a template from this image. A template is a reusable configuration for provisioning a container or a suite of containers. This is where you can define a multi-tier application consisting of different linked services (a service being one or more containers of the same type/image).

Creating templates

Templates can be created by:

  • Starting from a base container: select an image, save its container definition as a template and add additional containers/services along the way.
  • Importing a YAML file: click the import button, which allows you to either provide the contents of the YAML as text or browse the filesystem to upload a YAML file. The YAML represents the template - the configuration of the different containers and their connections. The supported formats are Docker Compose and the Container Service's own YAML format; a minimal import example is sketched below.
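
As a minimal sketch of a file that could be imported (a two-service Docker Compose file; the service and image names are placeholders):

cat > my-template.yml <<'EOF'
version: "2"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    links:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF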

Provisioning templates

Templates can be provisioned like single container images, with a catalog-like experience. Based on a variety of properties of the template and the whole environment, the containers of the template will be provisioned on one or more hosts. For more info see Policies and Placement. Once a template is provisioned, it is shown as an application in which you can drill down to the individual containers.

Exporting templates

Templates can be exported to a file in the same two formats that are supported for importing - Docker Compose and the Container Service's YAML format. You can import a template from one format, modify it in the UI and export it in the other format. However, keep in mind that some of the configurations that are specific to the Container Service, like health config, affinity constraints, etc., will not be included if you export in Docker Compose format. More info on Docker Compose support...

Containers & Applications

Container configurations

When defining a single container or a multi-container application, in addition to all known Docker container properties, you can also configure the following:

Health Config

To have the container status updated based on custom criteria, you can configure a health check method. This can be done under the Health Config tab in the Container Definition form. You can choose between the following health check methods: HTTP, TCP and executing a command on the container. No health checks are configured by default.

  • HTTP - when the HTTP option is set, you will have to provide an API path to access, and the HTTP method and version to use. The path is relative, i.e. you don't need to enter the container's address. You can also specify a timeout for the operation and thresholds for the healthy/unhealthy status. A healthy threshold of 2 means that 2 successive successful calls are needed in order for the container to be considered healthy (status RUNNING). An unhealthy threshold of 2 means that 2 unsuccessful calls are needed in order for the container to be considered unhealthy (status ERROR). For all the states in between the healthy and unhealthy thresholds, the container's status will be DEGRADED.
  • TCP - when the TCP option is set, only a port is required. The health check will try to establish a TCP connection with the container on the provided port. The options Timeout, Healthy Threshold and Unhealthy Threshold are the same as in HTTP mode.
  • Command - when Command option is set, you will be requested to enter a command to be executed on the container. The success of the health check will be determined by the exit status of the command.
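
Assuming the usual Unix convention (exit status 0 means success, anything else means failure), a command along these lines could be used for a web container; the URL, port and availability of curl inside the container are assumptions for the sketch:

# succeeds (exit code 0) only if the endpoint answers with a successful HTTP status
curl -f http://localhost:8080/health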

Networking

Admiral supports native Docker networking in the form of user-defined networks with the bridge and overlay drivers.

When designing your application, you can specify which one to use, and mix and match. For example, you can specify a user-defined network element and explicitly set its driver, e.g. the bridge driver for a single host or the overlay driver for multiple hosts. If none is specified, Admiral will take into consideration the capabilities of the environment and, if possible, create a multi-host overlay network, or fall back to a single-host bridge network.

In order to have an overlay network spanning your hosts, the hosts need to be set up in a cluster using an external key-value store. For more info on how to set this up, see overlay-network-with-an-external-key-value-store
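
As a sketch of the host-side setup described in the linked Docker documentation (the key-value store address, network interface and network name are placeholders; the --cluster-store/--cluster-advertise daemon flags exist in Docker 1.9 and later):

# on every host joining the cluster, start the daemon pointed at the shared key-value store
dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock \
    --cluster-store=consul://consul-host:8500 \
    --cluster-advertise=eth0:2376

# a multi-host overlay network created on one host becomes visible on all of them
docker network create --driver overlay my-overlay-net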

Cluster size and Scale in/out

Users have the ability to create clusters of containers by setting the cluster size field in the container provisioning form under "Policy". This means that Admiral will provision as many containers of that type as specified, and requests will be load balanced among all containers in the cluster. Users are also allowed to modify the cluster size of an already provisioned container/application by clicking the + and - icons in the container's grid tile. This will respectively increase or decrease the size of the cluster by 1. When modifying the cluster size at runtime, all affinity filters and placement rules are taken into account.