Candlepin has a general-purpose eventing/messaging model, primarily used for auditing (logging changes to consumers), but it can also be used for integration with other systems, email notification, IRC bots, etc.

HornetQ

We use HornetQ messaging in Candlepin to take care of delivering events to listeners. For "free", it gets us:

  • guaranteed message delivery
  • message persistence in case of a server crash

The HornetQ server runs inside Candlepin’s servlet context; clients communicate with it via the InVM transport.

We serialize Events to JSON in order to send them as HornetQ messages.

HornetQ throttling and paging

HornetQ stores events in memory or on disk; how much of each it uses is driven by configuration. See the JavaDoc of the config options in ConfigProperties.java for the most important options.

By configuring these options correctly you can achieve one of the following limiting behaviors:

  • The size of each HornetQ in-memory queue (in megabytes) is bounded, and threads that try to add more events to a full queue are blocked. This limiting behavior is called throttling.
  • The size of each HornetQ in-memory queue (in megabytes) is bounded, and any new Events are stored on disk instead of in memory. This limiting behavior is called paging.

It is important to note that, regardless of paging or throttling, events bigger than HORNETQ_LARGE_MSG_SIZE (in bytes) always go to disk (and thus may cause the server to run out of disk space). So it may sometimes be reasonable to set HORNETQ_LARGE_MSG_SIZE to a high number, to ensure that no messages fall into the large category.

For example, the following config puts an upper bound of 50 MB on every queue in HornetQ and blocks threads that try to add more events to a full queue. Thanks to the high HORNETQ_LARGE_MSG_SIZE, it will also not page any message smaller than 1,000,000 bytes to disk.
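A sketch of such a configuration is below. The property keys shown are illustrative assumptions; confirm the real key names in the JavaDoc of ConfigProperties.java. The values follow the example: 50 MB per queue, blocking producers (throttling), and a 1,000,000-byte large-message threshold.

```properties
# NOTE: illustrative key names -- the real keys are documented
# in ConfigProperties.java.

# Upper bound of 50 MB per in-memory queue
candlepin.audit.hornetq.max_queue_size=50

# BLOCK = throttle producing threads instead of paging to disk
candlepin.audit.hornetq.address_full_policy=BLOCK

# Messages above this size (in bytes) always go to disk
candlepin.audit.hornetq.large_msg_size=1000000
```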



Event

  • Simple base class we use to represent an event.
  • Carries data about:
    • Principal performing the action.
    • Date/time of the event.
    • Target type of the event: consumer, owner, entitlement
    • Type of the event: created, updated, deleted
    • ID of the owner of the object
    • ID of the object; combined with the event type, this allows queries for any specific object and event.
    • JSON blobs for old state of the object and new state of the object.
      • Can be loaded into a detached model object with Jackson.
  • We will still need history timestamps on the actual entities themselves, as the event history table could grow large and need to be periodically archived.
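The field list above can be sketched as a simple class. This is a sketch based on this list only; the actual Event class in Candlepin differs in detail, and the names here are illustrative:

```java
import java.util.Date;

// Sketch of the Event base class described above; field and
// accessor names are assumptions, not the exact Candlepin source.
class Event {
    enum Target { CONSUMER, OWNER, ENTITLEMENT }
    enum Type { CREATED, UPDATED, DELETED }

    private final String principal;  // principal performing the action
    private final Date timestamp;    // date/time of the event
    private final Target target;     // consumer, owner, entitlement
    private final Type type;         // created, updated, deleted
    private final String ownerId;    // ID of the owner of the object
    private final String entityId;   // ID of the object itself
    private final String oldEntity;  // JSON blob of the old state (may be null)
    private final String newEntity;  // JSON blob of the new state (may be null)

    Event(String principal, Target target, Type type, String ownerId,
          String entityId, String oldEntity, String newEntity) {
        this.principal = principal;
        this.timestamp = new Date();
        this.target = target;
        this.type = type;
        this.ownerId = ownerId;
        this.entityId = entityId;
        this.oldEntity = oldEntity;
        this.newEntity = newEntity;
    }

    Target getTarget() { return target; }
    Type getType() { return type; }
    String getOwnerId() { return ownerId; }
    String getEntityId() { return entityId; }
}
```

The old/new entity fields are plain JSON strings here; as noted above, in practice they can be loaded back into detached model objects with Jackson.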

Event Factory

  • Helper class for Resources/Curators to create events with the proper type/target.
  • Serializes entities to JSON.

Event Sink

  • Used by Resources/Curators to emit an event.

Event Source

  • Registers listeners to receive events.


Event Listener

  • Object adhering to a simple interface to process an event.
  • Must implement void onEvent(Event e), which is called for each event.
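The source/listener contract above can be sketched as follows. Apart from the onEvent(Event e) callback, the class and method names here are illustrative, not the real Candlepin API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the event payload; illustrative only.
class Event {
    final String type;      // e.g. "CONSUMER_CREATED"
    final String entityId;  // ID of the affected object

    Event(String type, String entityId) {
        this.type = type;
        this.entityId = entityId;
    }
}

// The listener contract: one callback invoked per delivered event.
interface EventListener {
    void onEvent(Event e);
}

// Plays the role of the Event Source: registers listeners and fans
// each incoming event out to all of them. The real implementation
// instead subscribes each listener to a HornetQ queue, which is what
// provides guaranteed delivery and persistence.
class EventDispatcher {
    private final List<EventListener> listeners = new ArrayList<>();

    void registerListener(EventListener listener) {
        listeners.add(listener);
    }

    void dispatch(Event e) {
        for (EventListener listener : listeners) {
            listener.onEvent(e);
        }
    }
}
```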


We have two listeners in Candlepin, both of which are enabled by default:

  • Database listener: logs events to the database in the cp_event table. The old and new entity fields are not logged.
  • Audit log listener: logs events to a file. Configurable via:
    • candlepin.audit.log_file - name of the file to log to. Defaults to /var/log/candlepin/audit.log.
    • candlepin.audit.log_verbose - boolean option to log the old and new entity fields. Defaults to false (don't log the old/new fields).
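For example, redirecting the audit log and enabling verbose output would use the two options above in Candlepin's config file (the /etc/candlepin/candlepin.conf path is the usual default; adjust for your deployment):

```properties
# Write audit events to a custom location
candlepin.audit.log_file=/var/log/candlepin/audit.log

# Include the old/new entity JSON blobs in each log entry
candlepin.audit.log_verbose=true
```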


Candlepin emits the following events:

  1. Consumer Created
  2. Consumer Deleted
  3. Consumer Consumes From a Pool
  4. Consumer Stops Consuming From a Pool
  5. Owner is Created
  6. Owner is Terminated
  7. New Pool is Created for an Owner
  8. Pool Quantity Changes

We plan to implement:

  1. Consumer Facts Updated (not implemented in CP yet?)
Last modified on 19 April 2024