patching
Table of Contents
- Module description
- Setup
- Architecture
- Design
- Patching Workflow
- Usage
- Configuration Options
- Reference
- Limitations
- Development
- Contributors
Module description
A framework for building patching workflows. This module provides building blocks for constructing complex patching environments for Windows and Linux (RHEL, Ubuntu) systems.
No Puppet agent is required on the end targets. The node executing the patching will need to have bolt installed.
Setup
Setup Requirements
This module makes heavy use of bolt; you'll need to install it to get started. Installation instructions are available in the Bolt documentation.
If you want to use the patching::snapshot_vmware plan/function, then you'll need the rbvmomi gem installed in the bolt ruby environment:
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi
Quick Start
cat << EOF >> ~/.puppetlabs/bolt/Puppetfile
mod 'puppetlabs/stdlib'
mod 'encore/patching'
EOF
bolt puppetfile install
bolt plan run patching::available_updates --targets group_a
# install rbvmomi for VMware snapshot support
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi
Architecture
This module is designed to work in enterprise patching environments.
Assumptions:
- RHEL targets are registered to Satellite / Foreman or the internet
- Ubuntu targets are registered to Landscape or the internet
- Windows targets are registered to WSUS and Chocolatey (optional)
Registration to a central patching server is preferred for speed of software downloads and control of phased patching promotions.
At some point in the future we will include tasks and plans to promote patches through these central patching server tools.
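For illustration, targets in these environments are typically organized into OS-specific groups in the bolt inventory file. A minimal sketch; the group names, credentials, and hostnames below are placeholders, not module requirements:
# inventory.yaml (illustrative sketch; names and credentials are placeholders)
groups:
  - name: rhel_nodes
    config:
      transport: ssh
      ssh:
        user: patchadmin
        private-key: ~/.ssh/id_rsa
    targets:
      - rhel01.domain.tld
  - name: windows_nodes
    config:
      transport: winrm
      winrm:
        user: DOMAIN\patchadmin
        password: 'CHANGE_ME'
        ssl: true
    targets:
      - win01.domain.tld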
Design
patching
is designed around bolt
tasks and plans.
Individual tasks have been written to accomplish targeted steps in the patching process.
Example: patching::available_updates is used to check for available updates on targets.
Plans are then used to pretty up output and tie tasks together.
This way end users can use the tasks and plans as building blocks to create their own custom patching workflows (we all know there is no such thing as one-size-fits-all).
For more info on tasks and plans, see the Usage and Reference sections.
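As a quick illustration of the task/plan split, you can run a task on its own for raw per-target results, or run the plan of the same name for formatted output. This sketch assumes patching::available_updates exists as both a task and a plan:
# run the task by itself (raw results for each target)
bolt task run patching::available_updates --targets group_a
# run the plan that wraps the task (formatted summary)
bolt plan run patching::available_updates --targets group_a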
Going further, many of the settings for the plans are configurable by setting vars on your groups in the bolt inventory file.
For more info on customizing settings using vars, see the Configuration Options section.
Patching Workflow
Our default patching workflow is implemented in the patching plan (plans/init.pp).
This workflow consists of the following phases:
- Organize inventory into groups, in the proper order required for patching (see the inventory sketch after this list)
- For each group...
- Check for available updates
- Disable monitoring
- Snapshot the VMs
- Pre-patch custom tasks
- Update the host (patch)
- Post-patch custom tasks
- Reboot targets that require a reboot
- Delete snapshots
- Enable monitoring
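A minimal sketch of how group ordering might be expressed in inventory; the patching_order variable name here is an assumption for illustration, see REFERENCE_CONFIGURATION.md for the authoritative option names:
groups:
  - name: group_a
    vars:
      patching_order: 1
    targets:
      - app01.domain.tld
  - name: group_b
    vars:
      patching_order: 2
    targets:
      - db01.domain.tld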
Usage
Check for available updates
This will reach out to all targets in group_a in your inventory and check for any available updates through the system's package manager:
- RHEL = yum
- Ubuntu = apt
- Windows = Windows Update + Chocolatey (if installed)
bolt plan run patching::available_updates --targets group_a
Disable monitoring
Prior to performing the snapshotting and patching steps, the plan will disable monitoring alerts in SolarWinds (by default).
This plan/task utilizes Bolt's remote transport.
bolt plan run patching::monitoring_solarwinds --targets group_a action='disable' monitoring_target=solarwinds
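For this to work, the monitoring_target name (solarwinds above) should resolve to an inventory target configured with the remote transport. A minimal sketch; the specific connection keys under remote (port, username, password) are assumptions for illustration:
targets:
  - name: solarwinds
    uri: solarwinds.domain.tld
    config:
      transport: remote
      remote:
        port: 17778
        username: 'monitoring_api_user'
        password: 'CHANGE_ME'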
Create snapshots
This plan will snapshot all of the hosts in VMware. The name of the VM in VMware is assumed to be the uri of the node in the inventory file.
/opt/puppetlabs/bolt/bin/gem install rbvmomi
bolt plan run patching::snapshot_vmware --targets group_a action='create' vsphere_host='vsphere.domain.tld' vsphere_username='xyz' vsphere_password='abc123' vsphere_datacenter='dctr1'
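Once patching has finished and the systems are verified, the same plan removes the snapshots by switching the action (connection parameters are the same as above):
bolt plan run patching::snapshot_vmware --targets group_a action='delete' vsphere_host='vsphere.domain.tld' vsphere_username='xyz' vsphere_password='abc123' vsphere_datacenter='dctr1'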
Perform pre-patching checks and actions
This plan is designed to perform custom service checks and shutdown actions before applying patches to a node.
If you have custom actions that need to be performed prior to patching, place them in the pre_update scripts and this plan will execute them.
Best practice is to define and distribute these scripts as part of your normal Puppet code, as part of the role for that node.
bolt plan run patching::pre_update --targets group_a
By default this executes the following scripts (targets where the script doesn't exist are ignored):
- Linux = /opt/patching/bin/pre_update.sh
- Windows = C:\ProgramData\patching\pre_update.ps1
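As an illustration, a pre_update script is just an ordinary executable; this hypothetical Linux example stops an application service before patches are applied:
#!/bin/bash
# /opt/patching/bin/pre_update.sh (hypothetical example)
# Stop the application service so its files are not in use while packages update.
set -e
systemctl stop myapp.service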
Deploying pre/post patching scripts
An easy way to deploy pre/post patching scripts is via the patching Puppet manifest or the patching::script resource.
Using the patching class:
class { 'patching':
  scripts => {
    'pre_patch.sh'  => {
      content => template('mymodule/patching/custom_app_pre_patch.sh'),
    },
    'post_patch.sh' => {
      source => 'puppet:///mymodule/patching/custom_app_post_patch.sh',
    },
  },
}
Via patching::script resources:
patching::script { 'custom_app_pre_patch.sh':
  content => template('mymodule/patching/custom_app_pre_patch.sh'),
}
patching::script { 'custom_app_post_patch.sh':
  source => 'puppet:///mymodule/patching/custom_app_post_patch.sh',
}
Or via Hiera:
patching::scripts:
  custom_app_pre_patch.sh:
    source: 'puppet:///mymodule/patching/custom_app_pre_patch.sh'
  custom_app_post_patch.sh:
    source: 'puppet:///mymodule/patching/custom_app_post_patch.sh'
Run the full patching workflow end-to-end
Organize the inventory into groups:
- patching::ordered_groups
Then, for each group:
- patching::cache_updates
- patching::available_updates
- patching::snapshot_vmware action='create'
- patching::pre_update
- patching::update
- patching::post_update
- patching::reboot_required
- patching::snapshot_vmware action='delete'
bolt plan run patching --targets group_a
Patching with Puppet Enterprise (PE)
When executing patching with Puppet Enterprise, Bolt will use the pcp transport.
This transport has a default timeout of 1000 seconds. Windows patching is MUCH slower than this and the timeouts will need to be increased.
If you do not modify this default timeout, you may experience the following error in the patching::update task or any other long-running task:
Starting: task patching::update on windowshost.company.com
Finished: task patching::update with 1 failure in 1044.63 sec
The following hosts failed during update:
[{"target":"windowshost.company.com","action":"task","object":"patching::update","status":"failure","result":{"_output":"null","_error":{"kind":"puppetlabs.tasks/task-error","issue_code":"TASK_ERROR","msg":"The task failed with exit code unknown","details":{"exit_code":"unknown"}}},"node":"windowshost.company.com"}]
Below is an example bolt.yaml with the settings modified:
---
pcp:
  # 2 hours = 120 minutes = 7,200 seconds
  job-poll-timeout: 7200
For a complete reference of the available settings for the pcp transport, see the bolt configuration reference documentation.
Configuration Options
This module allows many aspects of its runtime to be customized using configuration options in the inventory file.
For details on all of the available configuration options, see REFERENCE_CONFIGURATION.md
Example: Let's say we want to prevent some targets from rebooting during patching.
This can be customized with the patching_reboot_strategy variable in inventory:
groups:
  - name: no_reboot_nodes
    vars:
      patching_reboot_strategy: 'never'
    targets:
      - abc123.domain.tld
      - def4556.domain.tld
Reference
See REFERENCE.md
Limitations
This module has been tested on the following operating systems:
- Windows
  - 2008
  - 2012
  - 2016
- RHEL
  - 6
  - 7
  - 8
- Ubuntu
  - 16.04
  - 18.04
Development
See DEVELOPMENT.md