
Docker Swarm Tutorial


Create Docker Swarm Cluster

For the Docker Swarm Cluster I used 5 Ubuntu Servers on which I installed Docker with the apt package manager.

These are the names and IP addresses of the Ubuntu Hosts:

jkw-mgr-1	192.168.30.188 # Manager Node
jkw-mgr-2	192.168.30.187 # Manager Node
jkw-wkr-1	192.168.30.189 # Worker Node
jkw-wkr-2	192.168.30.190 # Worker Node
jkw-wkr-3	192.168.30.191 # Worker Node
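
For reference, this is a minimal sketch of the Docker installation on each Host, assuming Ubuntu's docker.io package from the default repositories (Docker's official apt repository works just as well):

# Install Docker on each Ubuntu Host
sudo apt update
sudo apt install -y docker.io
# Start Docker now and enable it at boot
sudo systemctl enable --now docker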

Create Manager Node

Swarm Manager Nodes handle the cluster management tasks and provide high availability: one or more of them can fail as long as a quorum of Managers remains. Only one of them is ever considered active and will issue commands against the Swarm. Swarm Manager Nodes also act as Worker Nodes.

Run the following Command on the first Manager Node to initialize a new Swarm:

docker swarm init \
--advertise-addr 192.168.30.188:2377 \
--listen-addr 192.168.30.188:2377

Shell Output:

Swarm initialized: current node (yj5xv9m0jz8crjh1liytv0n1x) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-60cyxdnrjrh9td7gmv7014desya07mymr2juqqn2d3znnofoyc-01hwaciaa35gxe15hult4t1ga 192.168.30.188:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Create Join Tokens

Note: The commands to join a worker and a manager are identical apart from the join tokens.

# Create Token to add Worker Nodes
docker swarm join-token worker
# Create Token to add Manager Nodes
docker swarm join-token manager

Shell Output:

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-60cyxdnrjrh9td7gmv7014desya07mymr2juqqn2d3znnofoyc-c00lc3zrsvsfvsmoyqff5gufe 192.168.30.188:2377
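
If a Join Token leaks it can be rotated. The old Token stops working for new joins; Nodes that already joined stay in the Swarm:

# Rotate the Worker Join Token
docker swarm join-token --rotate worker
# Rotate the Manager Join Token
docker swarm join-token --rotate manager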

Add Manager Nodes

Run this Command on the Manager Node that you want to add:

# Command
docker swarm join \
--token YourManagerToken \
--advertise-addr New-ManagerNodeIP:2377 \
--listen-addr New-ManagerNodeIP:2377 \
First-ManagerNodeIP:2377

# Example
docker swarm join \
--token SWMTKN-1-60cyxdnrjrh9td7gmv7014desya07mymr2juqqn2d3znnofoyc-c00lc3zrsvsfvsmoyqff5gufe \
--advertise-addr 192.168.30.187:2377 \
--listen-addr 192.168.30.187:2377 \
192.168.30.188:2377

Add Worker Nodes

Worker Nodes receive and execute tasks from the Manager Nodes.

Run the following Command on the Worker Node you want to add:

# Command
docker swarm join \
--token YourWorkerToken \
--advertise-addr New-WorkerNodeIP:2377 \
--listen-addr New-WorkerNodeIP:2377 \
First-ManagerNodeIP:2377

# Example
docker swarm join \
--token SWMTKN-1-60cyxdnrjrh9td7gmv7014desya07mymr2juqqn2d3znnofoyc-01hwaciaa35gxe15hult4t1ga \
--advertise-addr 192.168.30.189:2377 \
--listen-addr 192.168.30.189:2377 \
192.168.30.188:2377

Shell Output:

This node joined a swarm as a worker.

List Swarm Nodes

Run the following Command on any of the Manager Nodes:

# List Nodes in the Swarm
docker node ls

Shell Output:

ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yj5xv9m0jz8crjh1liytv0n1x *   jkw-mgr-1   Ready     Active         Leader           24.0.2
uuh9n76r0shcofldbocynxtty     jkw-mgr-2   Ready     Active         Reachable        24.0.2
twujif6709bu95szwoby7fvel     jkw-wkr-1   Ready     Active                          24.0.2
orhmtdyw1f9ykqjdc0namdjhb     jkw-wkr-2   Ready     Active                          24.0.2
a67q7coghdbsijlrmohbwr7gi     jkw-wkr-3   Ready     Active                          24.0.2

# Legend
* = The Node the "docker node ls" command was run from
Manager Status:
Empty = Worker Node
Leader = Lead Manager Node
Reachable = Other Manager Nodes
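
The Availability of a Node can be changed, for example to drain a Node before maintenance so that its Replicas are rescheduled on other Nodes:

# Drain a Node: its Replicas are rescheduled on other Nodes
docker node update --availability drain jkw-wkr-1
# Reactivate the Node after maintenance
docker node update --availability active jkw-wkr-1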

Manage Docker Swarm

Enable Auto Lock

This option forces Manager Nodes that have been restarted to present the Cluster Unlock Key before being permitted back into the Cluster.

# Enable Auto Lock: Store the Unlock Key!
docker swarm update --autolock=true


# Unlock Manager Node
docker swarm unlock
#Please enter unlock key: (Provide the Unlock Key)
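
As long as at least one Manager Node is unlocked, the current Unlock Key can be retrieved or rotated from it:

# Show the current Unlock Key
docker swarm unlock-key
# Rotate the Unlock Key
docker swarm unlock-key --rotate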

Swarm Services

Replication Mode

  • Replicated: Deploy a defined number of replicas as evenly as possible across the Cluster.
    --replicas 5
  • Global: Deploy a Single Replica on every Node in the Swarm (see the sketch after this list).
    --mode global
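
A minimal sketch of a Global Service, assuming the hypothetical service name service-global and the same image that is used in the next section:

# Create a Global Service: one Replica on every Node
docker service create --name service-global \
--mode global \
nigelpoulton/pluralsight-docker-ci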

Create Service

Create Service: Example

# Create new service: Example
docker service create --name service1 \
-p 8080:8080 \
--replicas 5 \
nigelpoulton/pluralsight-docker-ci

List Services:

# List running services
docker service ls

#Shell Output
ID             NAME       MODE         REPLICAS   IMAGE                                       PORTS
0ul3ubj00s2f   service1   replicated   5/5        nigelpoulton/pluralsight-docker-ci:latest   *:8080->8080/tcp

List Service Deployment Details / Service Replicas State:

docker service ps service1

#Shell Output
ID             NAME         IMAGE                                       NODE        DESIRED STATE   CURRENT STATE           ERROR     PORTS
jnejtbqcxy8b   service1.1   nigelpoulton/pluralsight-docker-ci:latest   jkw-wkr-2   Running         Running 6 minutes ago
o9uexr650pd6   service1.2   nigelpoulton/pluralsight-docker-ci:latest   jkw-wkr-3   Running         Running 6 minutes ago
kktyw4y5k37b   service1.3   nigelpoulton/pluralsight-docker-ci:latest   jkw-mgr-2   Running         Running 6 minutes ago
qnrat9nj8bpk   service1.4   nigelpoulton/pluralsight-docker-ci:latest   jkw-mgr-1   Running         Running 6 minutes ago
mmn79rdwrish   service1.5   nigelpoulton/pluralsight-docker-ci:latest   jkw-wkr-1   Running         Running 6 minutes ago
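
Since the Service publishes port 8080 through the Swarm Routing Mesh, it is reachable on that port on every Node, not just on the Nodes that run a Replica:

# The Routing Mesh answers on every Node
curl http://192.168.30.188:8080
curl http://192.168.30.191:8080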

Update Service

#Create Service
docker service create --name service-name \
-p 80:80 \
--replicas 5 \
repository/image:tagversion1

# Update Service
docker service update \
--image repository/image:tagversion2 \
--update-parallelism 2 \
--update-delay 20s service-name

Note: With these settings the image update is rolled out to 2 Replicas at a time, with a 20 second delay between each batch. Once the update has been rolled out, the update parallelism and update delay settings become part of the service definition and are automatically used for the next update, until they are overwritten. You can check the update settings of the Service with:
docker service inspect --pretty service-name
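
If an update misbehaves, the Service can be rolled back to its previous definition:

# Roll the Service back to the previous definition
docker service update --rollback service-name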


Scale Service

# Define Number of Service Replicas
docker service scale service1=8
# Check Service Replicas
docker service ps service1

#Shell Output:
ID             NAME         IMAGE                                       NODE        DESIRED STATE   CURRENT STATE            ERROR     PORTS
jnejtbqcxy8b   service1.1   nigelpoulton/pluralsight-docker-ci:latest   jkw-wkr-2   Running         Running 24 minutes ago
o9uexr650pd6   service1.2   nigelpoulton/pluralsight-docker-ci:latest   jkw-wkr-3   Running         Running 24 minutes ago
kktyw4y5k37b   service1.3   nigelpoulton/pluralsight-docker-ci:latest   jkw-mgr-2   Running         Running 24 minutes ago
qnrat9nj8bpk   service1.4   nigelpoulton/pluralsight-docker-ci:latest   jkw-mgr-1   Running         Running 24 minutes ago
mmn79rdwrish   service1.5   nigelpoulton/pluralsight-docker-ci:latest   jkw-wkr-1   Running         Running 24 minutes ago
z65ekcabcclq   service1.6   nigelpoulton/pluralsight-docker-ci:latest   jkw-wkr-1   Running         Running 18 seconds ago
pwl88w279kmw   service1.7   nigelpoulton/pluralsight-docker-ci:latest   jkw-wkr-2   Running         Running 18 seconds ago
tgejhgi1h1sd   service1.8   nigelpoulton/pluralsight-docker-ci:latest   jkw-mgr-1   Running         Running 18 seconds ago
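
Note: docker service scale can also scale multiple Services in one command, and scaling down works the same way. A short example, with service2 as a hypothetical second Service:

# Scale multiple Services at once
docker service scale service1=5 service2=3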

Docker Swarm Commands

# Create new Service in Swarm
docker service create 

# Delete Service from Swarm
docker service rm service-name


# List running Services
docker service ls

# Service Deployment Details / Service Replicas State
docker service ps service-name

# Service Details
docker service inspect service-name
# Service Details: Limit Output
docker service inspect --pretty service-name 

# List Service Logs
docker service logs service-name


# Scale the number of Service Replicas up and down
docker service scale service-name=10

Docker Stack Deploy

Add the deploy section to the Docker Compose configuration to define how the Service should be deployed in the Swarm:

version: "3.9"
services:
  redis:
    image: redis:alpine
    deploy:
      replicas: 6 # Define Service Replicas in the Cluster
      placement:
        max_replicas_per_node: 1
      update_config: # Define Update Policy
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
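
Deploy the Stack from a Manager Node; the Compose file name and the Stack name stack1 below are examples:

# Deploy the Stack from the Compose file
docker stack deploy -c docker-compose.yml stack1

# List Stacks and the Services of a Stack
docker stack ls
docker stack services stack1

# Remove the Stack
docker stack rm stack1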