Auto Attach is designed to attach the correct quantity of the correct pools to a consumer in order to fully cover as many products as possible.
This document provides details on the Auto Attach algorithm. The first section is a set of diagrams, each explaining part of the algorithm in greater detail. The second section provides a more in-depth textual description of the process.
The following diagrams show how the SLA affected the Auto Attach algorithm up to Candlepin version 2.4 and how that changed from 3.1 onwards. From Candlepin 3.1 onwards, SLAs are only used to prioritize pools.
Below is a more detailed and technical description of the Auto Attach process. If you are looking for details on why Auto Attach chose a particular set of pools, start with the diagrams above.
rules.js is passed a context, which contains important data such as the consumer, the service level, the available pools, the products to cover, and the compliance status at the time healing takes place.
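As a rough illustration, the context can be pictured as a simple structure like the one below; the field names here are hypothetical and do not reflect the exact rules.js contract.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AutoAttachContext:
    """Illustrative shape of the data handed to the rules; the real field
    names in rules.js may differ."""
    consumer: dict                                          # facts, installed products, role, addons
    service_level: Optional[str]                            # preferred SLA, used only to prioritize pools
    pools: list = field(default_factory=list)               # candidate pools available to the consumer
    products_to_cover: list = field(default_factory=list)   # product IDs that need healing
    compliance: dict = field(default_factory=dict)          # compliance status at the time of healing
```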
We can safely remove all pools from this list that:
During validation, pools that do not allow multi-entitlement (stacking of a single entitlement) have their available quantity set to 1, so that they can be considered the same way as multi-entitlement pools.
Next we must get the list of attached entitlements, which are used to complete partial stacks or to let us know when a stack cannot be completed. These entitlements are added to the list of entitlements whenever a compliance check is run.
The list of installed products comes directly from the context, but we filter out products that are already found in the compliance status “compliantProducts”. The consumer’s specified role and addons are also filtered out if they are already satisfied by any of the already-attached entitlements in the compliance status.
Entitlement group objects provide a way to treat stacks and single entitlements interchangeably. They consume pools and provide products. Pools are put into entitlement groups by stacking_id, whereas single entitlements are 1:1 with entitlement groups.
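Conceptually, an entitlement group can be pictured as a small structure like the sketch below; this is a hypothetical illustration, not the actual rules.js object.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EntitlementGroup:
    """A stack (many pools sharing a stacking_id) or a single
    non-stackable entitlement (exactly one pool)."""
    stacking_id: Optional[str]                                  # None for a single-entitlement group
    pools: list = field(default_factory=list)                   # pools the group may consume
    attached_entitlements: list = field(default_factory=list)   # entitlements already on the consumer

    def provided_products(self) -> set:
        # Union of everything the group's pools provide.
        return {pid for pool in self.pools for pid in pool.get("provided", [])}
```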
Once we have built all of the entitlement groups, we must validate them to make sure it is possible for them to cover our system. If a group is invalid, it is discarded, because it cannot be useful for the system.
If the entitlement group is not a stack, we can check overall validity based on whether the single pool, at its maximum available quantity, is compliant.
Otherwise, in the stacked case, it is a little more difficult. In some situations, a partial stack can be made compliant by removing entitlements. For example, if I have a stack of 4 socket entitlements and add one entitlement that covers 1GB of RAM, my entire stack will be partial. We first need to check whether the stack, with the already-attached subscriptions, is compliant. If it is, we can say the group is valid. However, if it is not compliant, we must remove all pools that enforce the stackable attributes that caused problems (taken from the compliance reason keys). After removing those pools, we run a final compliance check, the result of which is the group's validity.
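A minimal sketch of that validation logic, assuming hypothetical helpers is_compliant(consumer, pools, entitlements) and failing_attributes(consumer, group) that wrap the real compliance check and the compliance reason keys:

```python
def is_group_valid(group, consumer, is_compliant, failing_attributes):
    """Hypothetical sketch of entitlement-group validation.

    is_compliant(consumer, pools, entitlements) -> bool stands in for the
    compliance check run by the rules; failing_attributes(consumer, group)
    stands in for the stackable attributes named in the compliance reason
    keys (e.g. {"sockets", "ram"}).
    """
    if group.stacking_id is None:
        # Single entitlement: valid if the pool alone (at max available
        # quantity) makes its products compliant.
        return is_compliant(consumer, group.pools, [])

    # Stack: first try with the already-attached entitlements included.
    if is_compliant(consumer, group.pools, group.attached_entitlements):
        return True

    # Drop the pools that enforce the attributes that caused the failure,
    # then check compliance once more; that result is the group's validity.
    bad_attrs = failing_attributes(consumer, group)
    remaining = [p for p in group.pools
                 if not any(a in p.get("attributes", {}) for a in bad_attrs)]
    return is_compliant(consumer, remaining, group.attached_entitlements)
```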
For each entitlement group, we attempt to remove all pools that enforce each combination of attributes. If the consumer is a guest, we use the set of pools with the most virt_only pools; otherwise we use the set with the fewest pools. This prevents us from attaching “parallel” stacks, where we essentially have a compliant sockets stack and a compliant cores stack.
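A rough sketch of this step, under the assumption that each candidate set is re-checked for compliance with the same hypothetical is_compliant helper used above:

```python
from itertools import combinations

def remove_parallel_stacks(group, consumer, is_compliant, enforced_attrs):
    """Hypothetical sketch: try dropping the pools that enforce each
    combination of stackable attributes, keep the candidate sets that stay
    compliant, then prefer the set with the most virt_only pools for a
    guest, or the smallest set otherwise."""
    candidates = [group.pools]
    for r in range(1, len(enforced_attrs) + 1):
        for combo in combinations(enforced_attrs, r):
            kept = [p for p in group.pools
                    if not any(a in p.get("attributes", {}) for a in combo)]
            if kept and is_compliant(consumer, kept, group.attached_entitlements):
                candidates.append(kept)

    if consumer.get("is_guest"):
        return max(candidates,
                   key=lambda ps: sum(1 for p in ps
                                      if p.get("attributes", {}).get("virt_only")))
    return min(candidates, key=len)
```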
Remove all pools from the group that are not required to make the stack/product fully compliant. This can be skipped for non-stack groups, because they only have one pool, and at this point we know the group is required to cover a product (or role, or addon). We sort the pools based on priority, which is calculated from a multitude of attributes, both syspurpose and non-syspurpose (see the Pool Priority Algorithm section for details), so that higher-priority pools are preserved as long as possible. We try removing one pool at a time and check compliance of everything else (that has not been removed) at maximum quantity. If the stack is compliant and still covers all the same products, roles and addons, we discard the removed pool; otherwise we add it back.
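A small sketch of this pruning loop, again using the hypothetical is_compliant helper and assuming priority(pool) returns the score from the Pool Priority Algorithm:

```python
def prune_pools(group, consumer, is_compliant, priority):
    """Hypothetical sketch of pruning a stack group."""
    if group.stacking_id is None:
        return group.pools  # single-pool groups are already known to be needed

    def provided(pools):
        return {pid for p in pools for pid in p.get("provided", [])}

    required = provided(group.pools)
    kept = sorted(group.pools, key=priority)  # try removing low-priority pools first
    for pool in list(kept):
        trial = [p for p in kept if p is not pool]
        # Drop the pool only if everything left, at max quantity, is still
        # compliant and still covers the same products (and role/addons).
        if trial and provided(trial) == required and \
                is_compliant(consumer, trial, group.attached_entitlements):
            kept = trial
    return kept
```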
Firstly, we try to complete partial stacks, if possible. If there is a stack group with the same stack_id, completing the stack is possible (because that group passed validation).
This uses a greedy approach rather than enumeration. While a valid group exists that covers a not-yet-covered product (or role, or addon), we add the group that covers the most products to the “best groups” (a sketch of this loop appears after the tie-breaking rules below).
Ties are broken by (in this order):
This ensures that we cover every product, role and addon that it is possible to cover, with no superfluous groups (you cannot remove any stack or non-stackable entitlement and remain compliant).
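For illustration only, the greedy loop might look like the following, where covered_by(group) is a hypothetical helper returning the still-uncovered products, roles and addons that a group would satisfy (tie-breaking is omitted):

```python
def select_best_groups(groups, uncovered, covered_by):
    """Hypothetical sketch of the greedy selection of entitlement groups."""
    best_groups = []
    remaining = set(uncovered)
    while remaining:
        candidates = [g for g in groups if covered_by(g) & remaining]
        if not candidates:
            break  # nothing left can cover what remains
        # Pick the group covering the most remaining items; the real rules
        # break ties by further criteria (priority, pool count, ...).
        best = max(candidates, key=lambda g: len(covered_by(g) & remaining))
        best_groups.append(best)
        remaining -= covered_by(best)
        groups = [g for g in groups if g is not best]
    return best_groups
```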
This is very similar to Prune Pools, except that here we know every pool is necessary and we want the minimum quantity per pool. We loop over the pools again, adjusting the quantity to consume, starting at the minimum increment and going up to the maximum available, until the stack is compliant. We then return a map of pool ID to pool quantity for Candlepin to interpret.
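A hedged sketch of that loop, assuming a hypothetical stack_is_compliant(consumer, pools, quantities) check and per-pool increment and available fields:

```python
def select_pool_quantities(pools, consumer, stack_is_compliant):
    """Hypothetical sketch: raise each pool's quantity from its minimum
    increment toward its available maximum until the stack is compliant,
    and return {pool_id: quantity} for Candlepin to act on."""
    quantities = {p["id"]: p.get("increment", 1) for p in pools}
    for pool in pools:
        while quantities[pool["id"]] < pool["available"] and \
                not stack_is_compliant(consumer, pools, quantities):
            quantities[pool["id"]] += pool.get("increment", 1)
        if stack_is_compliant(consumer, pools, quantities):
            break
    return quantities
```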
When a guest attempts to Auto Attach, the host will first attempt to Auto Attach, with some significant changes:
TODO: Do we need to consider quantity here? If there is already a virt_only pool but it’s fully used, it would be nice to add more. However, whatever we add must not be stacked with what’s already there, otherwise we won’t actually be adding any new guest entitlements (due to one sub-pool per stack). For a first draft, it is probably best to ignore quantity/usage: if there’s a virt_only pool, we’ve done what we can. If it’s fully consumed, this code will not help the guest out.
Caveats:
This step is not called in sequence with the previous algorithm steps; rather, it is invoked during two of them: Prune Pools and Select Best Entitlement Groups.
It calculates the priority score for a single pool (a stack of pools is scored by the average priority of all its pools). Each pool/product attribute generates a numerical score (which might be positive, negative, or 0). Each attribute has an assigned weight that is supposed to be larger than the sum of the weights of all less-important attributes combined (e.g. service_type: 350 > requires_host + virt_only + sockets + ram + cores + vcpu: 330).
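To make the weighting idea concrete, here is a toy scoring sketch; the individual weights below are invented purely to respect the 350 > 330 relationship quoted above and are not the real values from the rules:

```python
# Illustrative weights only; chosen so that service_type (350) outweighs
# the sum of the remaining attributes (150 + 100 + 20*4 = 330).
WEIGHTS = {
    "service_type": 350,
    "requires_host": 150,
    "virt_only": 100,
    "sockets": 20,
    "ram": 20,
    "cores": 20,
    "vcpu": 20,
}

def pool_priority(attribute_scores):
    """attribute_scores maps attribute name -> -1/0/+1 depending on whether
    the pool's attribute mismatches, is absent, or matches the consumer."""
    return sum(WEIGHTS.get(attr, 0) * score for attr, score in attribute_scores.items())

def stack_priority(per_pool_scores):
    """A stack is scored by the average priority of its pools."""
    scores = [pool_priority(s) for s in per_pool_scores]
    return sum(scores) / len(scores) if scores else 0
```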
Attributes are of three types:
Here is an example calculation of a pool’s score, based on the attributes set on a consumer and the pool:
The host must be auto-attached in a specific way such that the guest can use the resulting derived pools.
This is done by accumulating pools to use based on the guest’s installed products.
The possible pools for the host to attach will be determined as follows:
Pool filtering is done in the JavaScript rules based on parameters of the consumer and the pool. Each filter is applied if and only if the parameter exists on the pool or the pool's product. A condensed sketch of a few of these checks follows the list below.
virt_only: remove if not a guest consumer.
physical_only: remove if a guest consumer.
unmapped_guest_only: remove if not a guest or if guest has a known host.
requires_host: remove if consumer is not a guest or if it does not have the specific host.
requires_consumer: remove if not the specific consumer.
requires_consumer_type: remove if not the specific consumer type.
vcpu: remove if consumer vcpu count exceeds the product’s count [unless stackable].
architecture: remove if the consumer’s arch is not in the product’s arch list.
sockets: remove if consumer socket count exceeds the product’s count [unless stackable].
cores: remove if consumer core count exceeds the product’s count [unless stackable].
ram: remove if consumer ram count exceeds the product’s count [unless stackable].
storage_band: remove if consumer storage band count exceeds the product's count [unless stackable].
instance_multiplier: remove if the quantity requested does not divide evenly by the pool's instance multiplier.
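As referenced above, a condensed sketch of how a few of these filters can be expressed (pool and consumer are plain dicts here, and the attribute names are taken from the list above):

```python
def filter_pools(pools, consumer, quantity=1):
    """Hypothetical sketch of a few of the filters above; each check only
    applies when the pool (or its product) actually carries the attribute."""
    kept = []
    for pool in pools:
        attrs = pool.get("attributes", {})
        if "virt_only" in attrs and not consumer.get("is_guest"):
            continue
        if "physical_only" in attrs and consumer.get("is_guest"):
            continue
        if "requires_host" in attrs and \
                consumer.get("host") != attrs["requires_host"]:
            continue
        if "architecture" in attrs and \
                consumer.get("arch") not in attrs["architecture"].split(","):
            continue
        if "instance_multiplier" in attrs and \
                quantity % int(attrs["instance_multiplier"]) != 0:
            continue
        kept.append(pool)
    return kept
```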
The following consumer-to-pool compatibilities are checked: