How do I promote an IPA replica to a master?

The short answer is: you don’t, it’s already a master!

All IPA servers are masters and equals, though some are more equal than others. The distinguishing factors are which master was installed first and whether a given master has a CA.

In any IPA installation you absolutely want more than one master running a CA so you don’t have a single point of failure. This is not done automatically when installing a new master: you need to add the --setup-ca flag, or run ipa-ca-install after the install.

The first IPA master installed is distinguished by two tasks it is responsible for: generating the CRL and renewing the CA subsystem certificates. See the IPA wiki for details on how to switch the master responsible for these.

NSS and forking

I was reminded recently that if you are going to use NSS as your SSL library then you need to be sure to get all the forking out of the way before you call NSS_Init*. The NSS PKCS#11 loader, including the built-in softokn, has fork detection and refuses to load if a fork is detected.

I had to bend over backwards to work around this in mod_nss a few years ago.

SubjectAltNames and NSS

NSS is very strict when it comes to validating a certificate hostname. Per RFC 2818, if there is a subjectAltName defined then ONLY the subjectAltName is used to validate the certificate. Based on only a little bit of testing, OpenSSL appears to be a bit more lenient: a certificate that returns an error in NSS worked with s_client.

The error you’d get out of NSS via curl is:

Unable to communicate securely with peer: requested domain name does not match the server's certificate.
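That strictness is easy to demonstrate. Here is a simplified Python sketch of the RFC 2818 rule (my own illustration, not NSS code, and ignoring wildcard matching):

```python
def match_hostname(cert, hostname):
    """Simplified RFC 2818 check: if any subjectAltName dNSName entries
    exist, ONLY those are consulted; the subject CN is ignored."""
    sans = [v for k, v in cert.get('subjectAltName', ()) if k == 'DNS']
    if sans:
        return hostname in sans  # exact match only, no wildcards here
    # No SAN: fall back to the commonName
    for rdn in cert.get('subject', ()):
        for k, v in rdn:
            if k == 'commonName':
                return v == hostname
    return False

# A cert whose CN matches but whose SAN does not is rejected:
cert = {'subject': ((('commonName', 'www.example.com'),),),
        'subjectAltName': (('DNS', 'alt.example.com'),)}
print(match_hostname(cert, 'www.example.com'))  # False: the SAN wins
print(match_hostname(cert, 'alt.example.com'))  # True
```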

So, if you have a host that needs to serve content under more than one name, you’ll need to include a subjectAltName dNSName in the CSR for each name.

With certmonger it might look something like this:

# ipa-getcert request -d /etc/httpd/alias -n Server-Cert -p /etc/httpd/alias/pwdfile.txt -N "" -D "" -K HTTP/ -D ""

Note that this won’t actually work well with IPA by default. You’d also need to tweak ipa-rewrite.conf so the request doesn’t result in a 301 redirect.
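When certmonger isn’t in the picture, the equivalent CSR can be generated by hand. A sketch with openssl (1.1.1+ for -addext; the hostnames are placeholders):

```shell
# Key + CSR carrying two subjectAltName dNSName entries, one per hostname
openssl req -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr \
    -subj "/CN=host1.example.com" \
    -addext "subjectAltName=DNS:host1.example.com,DNS:host2.example.com"

# Confirm both names made it into the request
openssl req -in server.csr -noout -text | grep -A1 'Subject Alternative Name'
```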

FreeIPA 3.3.5 and sudo

The last time I had any real reason to play with sudo and IPA was before sssd got sudo support. I found the previous sudo-ldap debugging quite good, even if sudo itself was rather slow due to lack of caching.

A lot of users seem to have problems getting this set up, since older IPA clients will not do it automatically, so I thought I’d give it a go. I’m doing this on an up-to-date Fedora 20 system using the following IPA and SSSD versions:


I started with the sssd-sudo(8) man page which laid out quite clearly the changes I needed to make to /etc/nsswitch.conf and sssd.conf. I restarted sssd and found my user couldn’t sudo at all, which makes sense since I hadn’t added any rules yet.
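For reference, the changes it describes boil down to two small fragments (the exact services line depends on what you already have enabled):

```ini
# /etc/nsswitch.conf: have sudo ask sssd for rules
sudoers: files sss

# /etc/sssd/sssd.conf: add sudo to the list of started services
[sssd]
services = nss, pam, sudo
```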

Ok, so I added a single rule to run any command on any host for a new group, sudoers, of which my test user is a member. Oh, and be sure that the user is a member of the group before logging in so the groups evaluate properly.

I created the group and sudo rule with:

[admin@ipaserver]$ ipa group-add sudoers
[admin@ipaserver]$ ipa group-add-member sudoers --users=tuser1
[admin@ipaserver]$ ipa sudorule-add --hostcat=all --cmdcat=all sudoers
[admin@ipaserver]$ ipa sudorule-add-user --group=sudoers sudoers

I should also note that I still have the HBAC allow_all rule enabled. If you’ve disabled it then you’ll need HBAC rules granting access to the users you want to be able to run sudo.

Before starting real testing, I created /etc/sudo.conf with these contents:

Debug sudo /var/log/sudo.log all@debug

This gives me a quite verbose log of what is going on. It probably makes more sense to a sudo developer but I can more or less follow along with the number of rules being evaluated, etc.

To double-check that the rule exists we can look at it in LDAP as the IPA admin user:

[admin@ipaserver]$ kinit admin
[admin@ipaserver]$ ldapsearch -LLL -Y GSSAPI -b ou=SUDOers,dc=example,dc=com
SASL/GSSAPI authentication started
SASL username: admin@EXAMPLE.COM
SASL data security layer installed.
dn: ou=sudoers,dc=example,dc=com
objectClass: extensibleObject
ou: sudoers

dn: cn=sudoers,ou=sudoers,dc=example,dc=com
objectClass: sudoRole
objectClass: top
sudoUser: %sudoers
sudoHost: ALL
sudoCommand: ALL
cn: sudoers

Ok, so now I just need to ssh into the box and try sudo -l:

[admin@ipaserver]$ ssh tuser1@ipaserver
[tuser1@ipaserver]$ sudo -l
[sudo] password for tuser1: 
User tuser1 may run the following commands on ipaserver:
    (root) ALL

I also want to avoid authentication so I can update the rule to not require it:

[admin@ipaserver]$ ipa sudorule-add-option sudoers --sudooption='!authenticate'

Remember that the rules are cached so changes may not be available immediately, but it worked for me:

[tuser1@ipaserver]$ sudo -l
User tuser1 may run the following commands on ipaserver:
    (root) ALL

There is a somewhat old, but good, document on this; in particular it has good information on how the caching works.

FreeIPA and no DNA range

Ok, so let’s say you have an initial IPA master and one or more additional masters (aka replicas). You’ve always done all administration on the first one, it is now temporarily or permanently gone, and you really need to add that new CEO’s unix account.

If you try to add a new user you might get a nasty error like this:

ipa: ERROR: Operations error: Allocation of a new value for range cn=posix ids,cn=distributed numeric assignment plugin,cn=plugins,cn=config failed! Unable to proceed.

When a master is created it isn’t automatically assigned a DNA range for POSIX IDs. When a range is needed, one is requested from the master this master was created from, and it gets half of that master’s remaining range.

This means that the current master can’t contact another one to get a DNA range, so you can’t add any new users.

You can find the master it is trying to talk to here:

$ ldapsearch -x -D 'cn=Directory Manager' -W -b cn=posix-ids,cn=dna,cn=ipa,cn=etc,dc=example,dc=com

In all likelihood it is pointing to the master that is down.

So how do you fix it?

If you have another master with a DNA range assigned then you can change the value of dnaHostname in the above entry to point to that master. The downside is that you run the risk of losing a huge chunk of unused IDs.

How do I do it without losing a ton of values? That’s quite a loaded question as it depends greatly on your environment. What you want to avoid, at almost any cost, is ending up with an overlapping DNA configuration such that two masters are issuing UIDs from the same namespace, or a configuration that re-assigns values.

You can find the initial namespace with:

$ ipa idrange-find

1 range matched
Range name: EXAMPLE.COM_id_range
First Posix ID of the range: 1689600000
Number of IDs in the range: 200000
Range type: local domain range
Number of entries returned 1

Or by looking at /var/log/ipaserver-install.log on the initial master.

DNA would have tried to give you half its remaining range if the master had been up so for safety you could try that, assuming it doesn’t overlap any other masters. You’ll need to check their DNA configurations to be sure.
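To make that concrete with the numbers from the idrange output above, carving off the top half of the namespace for the stranded master works out to (a sketch; in a real repair you must first subtract whatever ranges the live masters already hold):

```python
# Namespace from the `ipa idrange-find` output above
base_id = 1689600000
range_size = 200000
range_end = base_id + range_size - 1   # 1689799999

# Hand the stranded master the top half, mirroring what DNA itself
# would have done had the first master been reachable
new_next = base_id + range_size // 2   # becomes dnaNextValue
new_max = range_end                    # becomes dnaMaxValue
print(new_next, new_max)               # 1689700000 1689799999
```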

If you are running IPA 3.3+ then ipa-replica-manage can help you configure DNA properly. See dnarange-show and dnarange-set. Don’t be confused by dnanextrange-*, that is more for preserving ranges when a master is deleted.

For now I’m doing this the manual way which will work on any version.

Run this on each master:

$ ldapsearch -x -D 'cn=Directory Manager' -W -b 'cn=Posix IDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config'

If the dnaNextValue is 1101 and the dnaMaxValue is 1100 then no range has yet been assigned.

WARNING: You cannot currently use the ipa idrange-add command to add a new range for POSIX uids. Through IPA 4.1 there is no connection between DNA and the ID range. The ID range shown with the idrange command is a convenience only.

Once you’re sure you have a viable range you can update the non-working master with whatever range you’ve come up with:

$ ldapmodify -x -D 'cn=Directory Manager' -W
Enter LDAP Password:
dn: cn=Posix IDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
changetype: modify
replace: dnaNextValue
dnaNextValue: 1689700000
replace: dnaMaxValue
dnaMaxValue: 1689799999

modifying entry "cn=Posix IDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config"

Now you can add a new user successfully:

$ ipa user-add --first=tim --last=user tuser1
Added user "tuser1"
  User login: tuser1
  First name: tim
  Last name: user
  Full name: tim user
  Display name: tim user
  Initials: tu
  Home directory: /home/tuser1
  GECOS: tim user
  Login shell: /bin/sh
  Kerberos principal: tuser1@EXAMPLE.COM
  Email address:
  UID: 1689700000
  GID: 1689700000
  Password: False
  Member of groups: ipausers
  Kerberos keys available: False

You can see that the UID is the value of dnaNextValue we set.

Future thoughts on host groups, Foreman, OpenStack and IPA

Get ready for a ramble…

IPA has hostgroups. Foreman has hostgroups. Openstack-Foreman-Installer (aka astapor) has hostgroups. Wouldn’t it be great to somehow link them together into one cohesive package?

Foreman already has some integration via its realm smartproxy. When provisioning a host you can set the class of this host which, via the magic of automember in IPA, will add it to the appropriate hostgroup. But this is really separate from anything happening with Foreman.

Foreman has a host group concept which defines the list of puppet modules and other environment for a group of hosts.

Might there be a way to combine the two, so that hosts could have consistent naming and be associated with the proper IPA hostgroups? If so, some more interesting policies could be applied, including:

  • Unified HBAC policies on the hosts to control access
  • The ability to have ipa-getkeytab re-fetch a keytab to maintain naming consistency for load-balancing
  • Once IPA has support for multiple certificate profiles, providing hostgroup-specific profiles for certain types of service hosts within OpenStack

Enabling SSL or tls-proxy in devstack

If you want to create an OpenStack environment using devstack with most endpoints protected by SSL there are two ways to do it: native SSL or a TLS proxy (aka an SSL terminator). Both are supported in devstack.

To enable native SSL, add this to your local.conf


To enable via TLS Proxy (stud in this case), add this to your local.conf


This will enable SSL endpoints for:

  • keystone
  • nova
  • cinder
  • glance
  • swift
  • neutron

devstack will generate its own CA certificate and add it to the global trust so all clients on the local machine should just work(tm).

Kernel panic from Solaris 10 x86 installation on KVM

I was trying to install Solaris 10 from sol-10-u11-ga-x86-dvd.iso today and it wouldn’t boot on a generic x86_64 KVM VM with 1GB RAM and an 8GB disk. It failed with a kernel panic.

It seems related to the amount of RAM because I bumped it up to 2GB and the VM booted and I’ve started the installation.

As a note to self, 8GB is not enough for a Developer install either. I went with 14GB.

Keystone and HAProxy

I’m trying to get the astapor puppet module (used in the Openstack Foreman Installer and Staypuft) to configure SSL via a proxy. I’m going to use haproxy since it may already be available on the system and it supports SSL termination.

I’m starting with Keystone, as usual, since it is the core of things. Here are some notes from my first crack at doing it manually.

I cheated a bit and used this blog entry to get the basic gist of configuring haproxy for SSL termination. I just copied the default haproxy.cfg to keystone.cfg, deleted the default listeners and added this block:

frontend main
    bind *:5000 ssl crt /etc/pki/tls/private/combined.pem
    default_backend keystone-backend

frontend admin
    bind *:35357 ssl crt /etc/pki/tls/private/combined.pem
    default_backend admin-backend

backend keystone-backend
    redirect scheme https if !{ ssl_fc }
    server keystone1 check

backend admin-backend
    redirect scheme https if !{ ssl_fc }
    server admin1 check

I started it with:

# haproxy -f /etc/haproxy/keystone.cfg

And of course it failed because keystone is already listening on those ports. So I left it dead for now. I switched gears and started following my previous blog post on configuring keystone for SSL. The difference is that I just need to create the new secure endpoint, then re-configure keystone.cfg to listen on ports 5001 and 35358 instead.

Note: haproxy only takes a single certificate option for SSL, so you need to concatenate the public cert, private key and CA cert(s) into a single file and use that. When I generate these certs using certmonger I’ll probably end up using a post-save script to do the concatenation.
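The concatenation itself is trivial. A throwaway sketch (the printf lines stand in for your real cert, key and CA files, and the output path would be whatever haproxy’s crt option points at):

```shell
# Stand-ins for the real PEM files
printf '%s\n' 'CERTIFICATE' > server.crt
printf '%s\n' 'PRIVATE KEY' > server.key
printf '%s\n' 'CA CERT'     > ca.crt

# haproxy wants all three in one file, in this order
cat server.crt server.key ca.crt > combined.pem
```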

So I did that, deleted the original keystone endpoint, restarted the openstack-keystone service, and finally was able to start up haproxy.

I then fixed my adminrc to use SSL and include OS_CACERT=/path/to/ca and then tried a keystone endpoint-list only to get an SSL failure.

The problem is in python-backports-ssl_match_hostname. The puppet manifests I’m using currently put IP addresses in for everything and I’ve no time or skill to track all that down so I figured I could cheat for a bit and use an IP Address SAN. The problem is that this is explicitly not allowed in match_hostname so the request fails. For now I added some matching code so it works:

if key == 'IP Address':
    # accept a literal IP Address SAN that matches the requested host
    if value == hostname:
        return
    dnsnames.append(value)

So with that in place I can now run keystone endpoint-list successfully. I then moved onto the rest of my previous blog on manually converting to secure Keystone and was able to get nova, glance and cinder working. I’m just about ready to fire up a VM at this point.

CA verification and requests

I’ve seen several projects that use requests that try to pass in local CA information. This is fine and generally pretty functional for those that use self-signed certificates, but the fallback when no CA is provided tends to be None. This causes requests to check two environment variables: REQUESTS_CA_BUNDLE and CURL_CA_BUNDLE. If neither is set then you get no CA validation at all which basically dooms the request to failure.

Instead, IMHO, verify should be set to requests.certs.where() if no CA is provided by the client. Really this should be the default in requests.

Adding CAs to the global store is easier than ever and generally a lot easier to handle than copying PEM files all over the place and referencing long paths in potentially multiple configuration files (in the case of OpenStack).
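A minimal sketch of that fallback (requests.certs.where() is requests’ bundled CA store; the wrapper function is my own illustration, not part of any project):

```python
import requests
import requests.certs

def verified_get(url, ca_bundle=None, **kwargs):
    # Never let verify fall through to None: if the caller provided no
    # CA, use the bundle requests ships rather than skipping validation.
    verify = ca_bundle if ca_bundle else requests.certs.where()
    return requests.get(url, verify=verify, **kwargs)
```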