Category Archives: OpenStack

oslo messaging and notifications

novajoin needs to monitor nova notifications to know when an instance is deleted so that the host can be removed from IPA as well. I originally coded it to use the notifications topic, but ceilometer also consumes that topic, so novajoin-notify was getting only a subset of the deletes.

The fix is very easy: add a new topic to the notification_topics option in nova.conf, e.g.
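A minimal sketch of the change; the extra topic name novajoin_notifications is my own choice, and novajoin-notify would then listen on that topic instead of notifications:

```ini
[DEFAULT]
# Every topic listed here receives its own copy of each notification,
# so ceilometer and novajoin no longer consume from the same queue.
notification_topics = notifications,novajoin_notifications
```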


This isn’t at all obvious from any documentation.

novajoin and the 2016 OpenStack Barcelona Summit

The nova v2 vendordata (dynamic vendordata plugins) came up at the 2016 OpenStack Summit in Barcelona. It was expected at the outset to be a fairly short meeting, since the two proposals were seemingly straightforward: add authentication and caching. It ended up using all 40 minutes but in the end wasn’t all that contentious.

It was decided that authentication would be sent in all cases to the configured vendordata plugins. There was some brief discussion about whether the user token would be included but as I took it, nova would use its own (or some pre-configured) credential. This will require some paste configuration changes in novajoin but authentication should otherwise be supported.

The metadata server will also cache responses. Exactly how long and what will be cached is TBD. If I remember I’ll update this post with the gerrit review link once it comes out.

The idea is to commit to master, then backport to Newton stable.

The contentious part was related to that user token I mentioned. Adam Young from Keystone wanted that token to be sent along, even if expired, so one could know the roles of the user that kicked things off. The problem of course is that the user is just a snapshot in time. Roles change. Users are deleted. Apparently some users completely hammer on the metadata service today, some as frequently as every few minutes. At some point things could break if that user went away.

I was ambivalent about it. Adam’s point was that it could be used for access control, which is a good idea. I think that if the roles were cached instead of the user, that might make more sense. But even then people would complain that access they revoked or granted isn’t reflected immediately. It’s a no-win, I think. I kept my mouth shut in any case.

In the end this is good news for novajoin. I was quite uncomfortable having unauthenticated requests at all (e.g. metadata requests from an instance) so that’ll go away soon.

The caching will solve the problems I had bending over backwards with the IPA OTP. There could still be problems if the time to enroll the instance exceeds the nova metadata cache lifetime, so I’ll probably leave in my “last update wins” code, but this does make things a bit more predictable and will certainly be faster.

CentOS 7, cloud-init and DataSourceConfigDriveNet

Something I ran into when developing the novajoin service was that my cloud-init script was not executed if either force_config_drive was True in nova.conf or if config_drive was enabled for a particular instance. What I’d see is that no metadata would come across and cloud-init would do very little work at all beyond adding keypairs and configuring networking.

The image I was working on was CentOS-7-x86_64-GenericCloud.qcow2 (1511). If I used a similar RHEL 7 image things worked fine.

I determined the issue to be with the version of cloud-init. You need cloud-init-0.7.6 for config-drive to work. I got a copy of the cloud-init rpm that Red Hat uses and used virt-customize to update my CentOS image and things worked after that.

$ virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2 --install

With this image the novajoin service can push a cloud-init script that will enroll the instance into IPA.

nova metadata REST API

The nova service includes a metadata server where information about an instance is made available to that instance (for use during cloud-init, for example). This includes common things like the hostname, root password, ssh keypairs, etc.

A relatively new feature in Newton adds dynamic providers. When a request is made for metadata nova will contact the configured providers using a REST API and include the returned values in the metadata.

To enable dynamic metadata, add “DynamicJSON” to the vendordata_providers configuration option. This list can also include “StaticJSON” to continue serving the traditional static vendordata.

The vendordata_dynamic_targets configuration option specifies the URLs to be retrieved when metadata is requested.

The format for an entry in vendordata_dynamic_targets is name@url, where name is a short string not including the ‘@’ character and the URL can include a port number if required. The name distinguishes this provider from other dynamic providers and is used as the key for its metadata in the data returned to the instance.
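Putting the two options together, a minimal nova.conf sketch; the provider name join and the URL here are placeholders of my own:

```ini
[DEFAULT]
vendordata_providers = StaticJSON,DynamicJSON
# name@url: "join" becomes the key in vendor_data2.json
vendordata_dynamic_targets = join@http://127.0.0.1:9999/v1/
```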

This dynamic metadata is available in a new file, openstack/2016-10-06/vendor_data2.json.

It can be retrieved as a URL from within an instance using:

$ curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

The output will look something like:

    {
        "test": {
            "key1": "somedata",
            "key2": "something else"
        }
    }
The following is passed to the dynamic REST server when nova receives a metadata request:

  • project-id: The UUID of the project that owns this instance.
  • instance-id: The UUID of this instance.
  • image-id: The UUID of the image used to boot this instance.
  • user-data: As specified by the user at boot time.
  • hostname: The hostname of the instance.
  • metadata: As specified by the user at boot time (aka properties).
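As a sketch, the body nova POSTs to each dynamic target might look like this; all of the values are placeholders of mine, and only the field names come from the list above:

```json
{
    "project-id": "00000000-0000-0000-0000-000000000001",
    "instance-id": "00000000-0000-0000-0000-000000000002",
    "image-id": "00000000-0000-0000-0000-000000000003",
    "user-data": "",
    "hostname": "test.example.com",
    "metadata": {"ipa_enroll": "True"}
}
```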

OpenStack services obtaining tokens

I had a difficult time finding information on how a service might obtain a token to talk to other services. There is certainly a lot of code to pull from but the calls are quite distributed. I got most of this from the ceilometer project.

The basic idea is that you register the keystoneauth1 options into your configuration, load an auth plugin and a session from those options, and then use the session to make calls to services.

I’m using a global session in this code. You can just as easily drop that and generate a new session every time you need to talk to something.

This code will obtain a token using the information from the [service_credentials] section in /etc/test/test.conf and make a request to nova and print the results.

from keystoneauth1 import loading as ks_loading
from oslo_config import cfg
from novaclient import client as nova_client


CFG_GROUP = "service_credentials"

_AUTH = None
_SESSION = None


def get_session():
    global _SESSION
    global _AUTH

    if not _AUTH:
        _AUTH = ks_loading.load_auth_from_conf_options(cfg.CONF, CFG_GROUP)

    if not _SESSION:
        _SESSION = ks_loading.load_session_from_conf_options(
            cfg.CONF, CFG_GROUP, auth=_AUTH)

    return _SESSION


def novaclient():
    session = get_session()
    return nova_client.Client('2.1', session=session)


def register_keystoneauth_opts(conf):
    ks_loading.register_auth_conf_options(conf, CFG_GROUP)
    ks_loading.register_session_conf_options(
        conf, CFG_GROUP,
        deprecated_opts={'cacert': [
            cfg.DeprecatedOpt('os-cacert', group=CFG_GROUP),
            cfg.DeprecatedOpt('os-cacert', group="DEFAULT")]})


register_keystoneauth_opts(cfg.CONF)
cfg.CONF([], project='test')

x = novaclient()
print x.flavors.list()

Create a configuration file

# mkdir /etc/test

Add this to /etc/test/test.conf (note that I’m re-using the nova credentials for this demo):
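A sketch of what that section might contain, assuming password authentication; every value below is a placeholder to be replaced with your own (in this demo, nova’s) credentials:

```ini
[service_credentials]
auth_type = password
auth_url = http://127.0.0.1:5000/v3
username = nova
password = secret
project_name = services
user_domain_id = default
project_domain_id = default
```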


This was done using a Newton pre-release installation I’m developing on.

novajoin microservice integration

novajoin is a project for OpenStack and IPA integration. It is a service that allows instances created in nova to be added to IPA, with a host OTP generated automatically. This OTP is then passed into the instance to be used for enrollment during the cloud-init stage.

The end result is that a new instance will seamlessly be enrolled as an IPA client upon first boot.

Additionally, a class can be associated with an instance using Glance metadata so that IPA automember rules will automatically assign this new host to the appropriate hostgroups. Once that is done you can set up HBAC and sudo rules to grant the appropriate permissions/capabilities for all hosts in that group.

In short, it can simplify administration significantly.

In the current iteration, novajoin consists of two pieces: a REST microservice and an AMQP notification listener.

The REST microservice is used to return dynamically generated metadata that will be added to the information that describes a given nova instance. This metadata is available at first boot and this is how novajoin injects the OTP into the instance for use with ipa-client-install. The framework for this change is being implemented in a nova review.

The REST server just handles the metadata; cloud-init does the rest. A cloud-init script is provided which glues the two together: it installs the needed packages, retrieves the metadata, then calls ipa-client-install with the requisite options.

The other server is an AMQP listener that identifies when an IPA-enrolled instance is deleted and removes the host from IPA. It may eventually handle floating IP changes as well, automatically updating IPA DNS entries. The issue here is knowing what hostname to assign to the floating IP.

Glance images can have metadata as well which describes the image, such as OS distribution and version. If these have been set then novajoin will include this in the IPA entry it creates.

The basic flow looks something like this:

  1. Boot an instance in nova, adding IPA metadata that specifies ipa_enroll True and optionally an ipa_hostclass.
  2. The instance boots. During cloud-init it retrieves its metadata.
  3. During metadata retrieval, ipa host-add is executed, adding the host to IPA (along with any available image metadata) and generating an OTP.
  4. The OTP and FQDN are returned in the metadata.
  5. Our cloud-init script is called to install the IPA client packages and retrieve the OTP and FQDN.
  6. ipa-client-install --hostname FQDN --password OTP is called.

This leaves us with an IPA-enrolled client which can have permissions granted via HBAC and sudo rules (like who is allowed to log into this instance, what sudo commands are allowed, etc).
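The tail end of that flow (steps 4 through 6) can be sketched in Python. The key names under "join" here are illustrative placeholders of mine, not novajoin’s actual schema:

```python
import json

# Hypothetical vendor_data2.json content; the "join" key and the
# field names inside it are illustrative, not novajoin's real schema.
raw = """
{
    "join": {
        "hostname": "test.example.com",
        "ipaotp": "s3cret0tp"
    }
}
"""

info = json.loads(raw)["join"]

# Build the enrollment command the cloud-init script would run (step 6).
cmd = ["ipa-client-install",
       "--hostname", info["hostname"],
       "--password", info["ipaotp"],
       "--unattended"]
print(" ".join(cmd))
```

A real cloud-init script would fetch the JSON from the metadata server with curl instead of embedding it, then execute the command.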

Nova join (take 2)

Rich Megginson started a project in the OpenStack nova service to enable automatic IPA enrollment when an instance is created. I extended this to add support for metadata and pushed it to github as novajoin.

This used the hooks mechanism within nova, which allowed one to extend certain operations (add, delete, networking, etc). Unfortunately this was not well documented, nor apparently well used, and the nova team wasn’t too keen on allowing full access to all nova internals, so they killed it.

The successor is an extension of the metadata plugin system, vendordata.

The idea is to allow one to inject custom metadata dynamically over a REST call.

IPA will provide a vendordata REST service that will create a host on demand and return the OTP for that host in the metadata. Enrollment will continue to happen via a cloud-init script which fetches the metadata to get the OTP.

A separate service will listen for notifications to capture host delete events.

I’m still working on networking as there isn’t a clear line which IP should be associated with a given hostname, and when. In other words, there is still a lot of handwaving going on.

I haven’t pushed the new source yet, but I’m going to use the same project after I tag the current bits. There is no point continuing development of the hooks-based approach since nova will kill it after the Newton release.

Notifications in devstack

I still haven’t found any good documentation on setting up notifications. Most of the blog entries I’ve found are quite dated and don’t seem to apply to Mitaka-era installs. This is what I came up with (I think) in a devstack environment in /etc/nova/nova.conf:

notification_driver = messagingv2
notification_topics = notifications
notify_on_state_change = vm_state
notify_on_any_change = True

I’m still toying with this but wanted to put something down before I do something dumb like re-installing devstack, which is how I wiped out my configuration the last time I had this working.

This is the python I’m using for now:

import json
from oslo_config import cfg
import oslo_messaging

class NotificationEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print 'notification:'
        print json.dumps(payload, indent=4)
        print publisher_id
        print event_type
        print metadata

transport = oslo_messaging.get_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications')]
endpoints = [NotificationEndpoint()]
server = oslo_messaging.get_notification_listener(transport, targets, endpoints)
print "Starting"
server.start()
print "Waiting for notifications"
server.wait()

Future thoughts on host groups, Foreman, OpenStack and IPA

Get ready for a ramble…

IPA has hostgroups. Foreman has hostgroups. Openstack-Foreman-Installer (aka astapor) has hostgroups. Wouldn’t it be great to somehow link them together into one cohesive package?

Foreman already has some integration via its realm smartproxy. When provisioning a host you can set the class of this host which, via the magic of automember in IPA, will add it to the appropriate hostgroup. But this is really separate from anything happening with Foreman.

Foreman has a host group concept which defines the list of puppet modules and other environment for a group of hosts.

Might there be a way to combine the two, so that hosts could have consistent naming and be associated with the proper IPA hostgroups? If so, some more interesting policies could be applied, including:

  • Unified HBAC policies on the hosts to control access
  • The ability to have ipa-getkeytab re-fetch a keytab to maintain naming consistency for load-balancing.
  • Once IPA has support for multiple certificate profiles, providing hostgroup-specific profiles for certain types of service hosts within OpenStack