Realmd and SSSD Active Directory Authentication

Introduction to SSSD and Realmd

Starting with Red Hat Enterprise Linux 7 and CentOS 7, SSSD (the ‘System Security Services Daemon’) and realmd were introduced. SSSD’s main function is to access a remote identity and authentication resource through a common framework that provides caching and offline support to the system. SSSD provides PAM and NSS integration and a database to store local users, as well as core and extended user data retrieved from a central server.

The main reason to transition from Winbind to SSSD is that SSSD can be used for both direct and indirect integration and allows administrators to switch from one integration approach to another without significant migration costs. The most convenient way to configure SSSD or Winbind to directly integrate a Linux system with AD is to use the realmd service, because it allows callers to configure network authentication and domain membership in a standard way. The realmd service automatically discovers information about accessible domains and realms and does not require advanced configuration to join a domain or realm.

The realmd system provides a clear and simple way to discover and join identity domains. It does not connect to the domain itself but configures underlying Linux system services, such as SSSD or Winbind, to connect to the domain.

Realmd Pam SSSD

Please read through this Windows integration guide from Red Hat if you want more information. This extensive guide contains a lot of useful information about more complex situations.

Realmd / SSSD Use Cases

How to join an Active Directory domain?

  1. First of all, install the required packages:
  2. Configure NTP to prevent time synchronization issues:
  3. Join the server to the domain:
  4. Add the default domain suffix to the SSSD configuration file:

    Add the following beneath [sssd]

  5. Finally, move the computer object to an organizational unit in Active Directory.
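
The steps above can be sketched as follows. The package names are the usual RHEL/CentOS 7 ones, and example.com is a placeholder for your AD domain, so adjust both to your environment:

```shell
# 1. Install the required packages
yum install -y realmd sssd oddjob oddjob-mkhomedir adcli samba-common-tools krb5-workstation

# 2. Configure time synchronization (Kerberos is sensitive to clock skew)
yum install -y ntp
systemctl enable ntpd --now

# 3. Discover and join the domain (prompts for the join account's password)
realm discover example.com
realm join example.com -U Administrator

# Verify the join worked
realm list
id someuser@example.com
```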

How to leave an Active Directory domain?

I have seen it happen multiple times that, although the computer object was created in Active Directory, it was still not possible to log in with an AD account. Each time, the solution was to remove the server from the domain and then simply add it back.
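
A sketch of leaving and rejoining, with example.com standing in for your domain:

```shell
# Leave the domain (removes the local configuration; may also
# disable the computer account in AD if permissions allow)
realm leave example.com

# Then simply join it again
realm join example.com -U Administrator
```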

How to permit only one Active Directory group to logon

It can be very useful to only allow one Active Directory group to log on, for example a group containing your Linux system administrators.
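
A sketch of how this might look with realm; the group name linux-admins is an assumption:

```shell
# First deny everyone, then permit only the one AD group
realm deny --all
realm permit -g 'linux-admins@example.com'
```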

How to give sudo permissions to an Active Directory group
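
A minimal sketch, again assuming a hypothetical AD group named linux-admins; drop a file into /etc/sudoers.d and always validate it with visudo:

```shell
# Grant sudo to an AD group (group name is an assumption)
echo '%linux-admins@example.com ALL=(ALL) ALL' > /etc/sudoers.d/linux-admins
chmod 440 /etc/sudoers.d/linux-admins

# Validate the sudoers syntax before relying on it
visudo -cf /etc/sudoers.d/linux-admins
```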



Example sssd.conf Configuration

The following is an example sssd.conf configuration file. I’ve seen it happen once that access_provider was somehow set to ad. I haven’t had the chance to play with that setting, as access_provider = simple has worked almost every time for me so far.

As Sean suggests in the comments, it’s not a good idea to set krb5_store_password_if_offline to True, since the passwords are stored in the keyring in plaintext. Alternatively, cache_credentials = True stores the passwords in the local database as SHA-512 hashes, which may be more appropriate if this functionality is needed.
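
A sketch of what such a file might look like; the domain name and every value here are assumptions to adapt to your environment:

```ini
[sssd]
domains = example.com
config_file_version = 2
services = nss, pam
default_domain_suffix = example.com

[domain/example.com]
ad_domain = example.com
krb5_realm = EXAMPLE.COM
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = False
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u
access_provider = simple
simple_allow_groups = linux-admins
```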


Required security permissions in AD

A few months ago, we had a problem where some users were no longer able to authenticate. After an extended search, we discovered the reason was a hardening change in the permissions on some OUs in our AD. My colleague Jenne and I discovered that the Linux server computer objects need minimal permissions on the OU which contains the users that want to authenticate on your Linux servers. After testing almost all obvious permissions, we came to the conclusion that the computer objects need “Read remote access information”!


How to debug SSSD and realmd?

The logfile which contains information about successful or failed login attempts is /var/log/secure. It contains information related to authentication and authorization privileges. For example, sshd logs all its messages there, including unsuccessful login attempts. Be sure to check that logfile if you experience problems logging in with an Active Directory user.
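
To get more detail out of SSSD itself, you can raise its debug level; a sketch, where the domain section name example.com is an assumption:

```shell
# Watch authentication messages as they happen
tail -f /var/log/secure

# Raise SSSD verbosity: add 'debug_level = 6' to the relevant sections
# of /etc/sssd/sssd.conf (e.g. [sssd] and [domain/example.com]),
# then restart the daemon and watch its logs
systemctl restart sssd
tail -f /var/log/sssd/sssd_example.com.log
```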

How to clear the SSSD cache?

As suggested by AP in the comments, you can manage your cache with the sss_cache command. It can be used to clear the cache and update all records:

The sss_cache command can also clear all cached entries for a particular domain:
If the administrator knows that a specific record (user, group, or netgroup) has been updated, then sss_cache can purge the records for that specific account and leave the rest of the cache intact:
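
A sketch of the three variants; the domain, user and group names are placeholders:

```shell
# Invalidate all cached entries and refresh them
sss_cache -E

# Invalidate all cached entries for one particular domain
sss_cache -Ed example.com

# Invalidate a single user or group record
sss_cache -u someuser
sss_cache -g linux-admins
```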

Please refer to the official documentation for more information.

In case the above doesn’t help, you can also remove the cache ‘the hard way’:

I just wanted to add this command, which also somehow helped me in one case.
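
A sketch of the ‘hard way’: stop SSSD, delete its database files, and start it again. Note that this removes all cached entries, including any cached credentials:

```shell
systemctl stop sssd
rm -f /var/lib/sss/db/*
systemctl start sssd
```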

Final Words

I hope this guide helps people towards a better Windows Linux integration. Let me know if you think there is a better way to do the above or if you have some useful information you think I should add to this guide.



Real-time Eventlog Monitoring with Nagios and NSClient++

Introduction to real-time eventlog monitoring

NSClient++ has a very powerful component that enables you to achieve real-time eventlog monitoring on Windows systems. This feature sends Windows eventlog entries passively to your monitoring server via NSCA or NRDP.

The biggest benefits of real-time eventlog monitoring are:

  • It can help you find problems faster (in real time), as NSClient++ sends events via NSCA the moment they occur.
  • It is much more resource-efficient than using active checks for monitoring eventlogs. It actually requires fewer resources on both the Nagios server and the client where NSClient++ is running!
  • There is no need to search through every application’s documentation, as you can just catch all the errors and filter out the ones you don’t need.

The biggest drawbacks of real-time eventlog monitoring are:

  • Because these are passive services, a new event will overwrite the previous one, which could cause you to miss a problem on your Nagios dashboards.
  • You need a dedicated database table to store the real-time eventlog exclusions.
  • You will need some basic scripting skills to automate building the real-time eventlog exclusion string in the NSClient++ configuration file.

General requirements for using real-time eventlog monitoring

NSCA Configuration of your NSClient++

As NSClient++’s real-time eventlog monitoring component sends the events passively to your Nagios server, you will need to set up NSCA. Please read through this documentation for configuring NSCA in NSClient++.

NSCA Configuration of your Nagios server

NSCA also requires some configuration on your Nagios server. Please read through this documentation for configuring NSCA in Nagios Core or this documentation for configuring NSCA in Nagios XI.

Passive services for each Windows host on your Nagios server

Each Windows host needs at least one passive service, which is able to accept the filtered Windows eventlogs. You can create as many of them as you require. I chose to use one for all application eventlog errors and one for all system eventlog errors:

Real-Time Eventlog Monitoring Passive Services

A database to store your real-time eventlog exclusions

If you want to generate a real-time eventlog exclusion filter, you need to somehow store a combination of hostnames, event IDs and event sources. We are using MSSQL at the moment and generate the exclusions with PowerShell. This database needs at least a servername, eventlog, eventid, eventsource and comment column. The combination of those allows you to make an exclusion for almost any type of Windows event.

Real-time Eventlog Monitoring Exclusion Database

Some sort of automation software which can be called with a Nagios XI quick action

Thanks to Nagios XI quick actions, you can quickly exclude noisy events by updating the NSClient++ configuration file with the correct filter. With the correct customization and scripts, this allows you to create a self-learning system. For this to work, you basically need one script which will store a new real-time eventlog exclusion in a database and another which generates the NSClient++ configuration file with the latest combination of real-time eventlog exclusions. We are using Rundeck, a free and open source automation tool to execute the above jobs.

Detailed NSClient ++ configuration

Minimal nsclient.ini ‘modules’ settings:

Minimal nsclient.ini ‘NSCA’ settings:
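
A sketch of what these two minimal fragments might look like together; the Nagios server address and password are assumptions, and exact key names can differ between NSClient++ versions:

```ini
[/modules]
CheckEventLog = enabled
NSCAClient = enabled

[/settings/NSCA/client]
hostname = auto
channel = NSCA

[/settings/NSCA/client/targets/default]
address = nsca://nagios.example.com:5667
encryption = none
password =
```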

The above configuration doesn’t use any encryption. Once your tests work out, I advise you to configure some sort of encryption to prevent attackers from sniffing your NSCA packets. Please note that at this moment (31/05/17) the official Nagios NSCA project does not support aes, only Rijndael. This GitHub issue has been created to fix the problem. You’ll have to use one of the other, weaker encryption methods for now.

Example nsclient.ini ‘eventlog’ settings:

This is an example configuration for getting real-time eventlog monitoring to work. Please note that this has been tested on NSClient++; I’m not 100% sure it works on earlier versions.
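
A sketch of what the real-time eventlog settings might look like, using the DUMMYAPPLICATIONFILTER and DUMMYSYSTEMFILTER placeholders; the section and key names are assumptions based on NSClient++ 0.4/0.5-style configuration and may need adjusting:

```ini
[/settings/eventlog/real-time]
enabled = true

[/settings/eventlog/real-time/filters/application_errors]
log = application
filter = level = 'error' AND (DUMMYAPPLICATIONFILTER)
destination = NSCA
target service = Application Eventlog
maximum age = 5s

[/settings/eventlog/real-time/filters/system_errors]
log = system
filter = level = 'error' AND (DUMMYSYSTEMFILTER)
destination = NSCA
target service = System Eventlog
maximum age = 5s
```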

The above configuration template is just an example. As you can see it contains a DUMMYAPPLICATIONFILTER and a DUMMYSYSTEMFILTER. You can easily replace these with the generated exclusion filter. A few examples of how such a filter might look:

(id NOT IN (1,3,10,12,13,23,26,33,37,38,58,67,101,103,104,107,108,110,112,274,502,511,1000,1002,1004,1005,1009,1010,1026,1027,1053,1054,1085,1101,1107,1116,1301,1325,1334,1373,1500,1502,1504,1508,1511,1515,1521,1533)) AND (id NOT IN (1509) OR source NOT IN ('Userenv')) AND (id NOT IN (1055) OR source NOT IN ('Userenv')) AND (id NOT IN (1030) OR source NOT IN ('Userenv')) AND (id NOT IN (1006) OR source NOT IN ('Userenv')) 


(id NOT IN (1,3,4,5,8,9,10,11,12,15,19,27,37,39,50,54,56,137,1030,1041,1060,1066,1069,1071,1111,1196,3621,4192,4224,4243,4307,5722,5723)) AND (id NOT IN (36888) OR source NOT IN ('Schannel')) AND (id NOT IN (36887) OR source NOT IN ('Schannel')) AND (id NOT IN (36874) OR source NOT IN ('Schannel')) AND (id NOT IN (36870) OR source NOT IN ('Schannel')) AND (id NOT IN (12292) OR source NOT IN ('VSS')) AND (id NOT IN (7030) OR source NOT IN ('ServiceControlManager')) 

Only errors which are not filtered by the real-time eventlog filters such as the examples above will be sent to your Nagios passive services.

Multiple NSCA Targets

This is an nsclient.ini config file where two NSCA targets are defined. This can be useful in scenarios where a backup Nagios server needs to be identical to the primary Nagios server:
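
A sketch of how two targets might be defined; the addresses are placeholders and the exact key names may vary per NSClient++ version:

```ini
[/settings/NSCA/client/targets/primary]
address = nsca://nagios1.example.com:5667

[/settings/NSCA/client/targets/backup]
address = nsca://nagios2.example.com:5667
```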

How to generate errors in your Windows eventlogs?

In order to test, you will need a way to debug and hence a way to generate errors with specific sources or IDs. You can do this very easily with PowerShell:

If you get an error saying that the source passed with the above command does not exist, you can create it like this:

Or another way:
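
A sketch of how these commands might look; the source name and event ID are arbitrary test values:

```powershell
# Write a test error event (source and ID are arbitrary examples)
Write-EventLog -LogName Application -Source 'MyTestSource' -EntryType Error -EventId 1000 -Message 'Test error for Nagios'

# If the source does not exist yet, create it first (requires admin rights)
New-EventLog -LogName Application -Source 'MyTestSource'

# Or another way, using the .NET API directly
[System.Diagnostics.EventLog]::CreateEventSource('MyTestSource', 'Application')
```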

(Almost) Final Words

As I can hear some people thinking, “why don’t you post the code to generate the real-time eventlog exclusion filter?”. Well, the answer is simple: I don’t have the time to clean up all the code so that it doesn’t contain any sensitive information. But as a special gift for all my blog readers who got to the end of this post, I’ll post a snippet of the exclusion-generating PowerShell code here. The rest you will have to make yourself for now.

I will open the comments section for now, but please only use it for constructive information. 



SSLLabs A+ Rating for Let’s Encrypt on CentOS 7


This is a small, growing blog post about obtaining an A+ rating with Apache 2.4.6 or higher on a freshly installed CentOS 7 with Let’s Encrypt certificates. Several blog posts claim that getting an A+ rating on SSLLabs isn’t possible without HPKP (HTTP Public Key Pinning). This actually isn’t true. It’s perfectly possible to get an A+ rating by just enabling HSTS and OCSP stapling, which are both easy to implement (compared to HPKP).



Contrary to what some blogs are posting, you also don’t need to use a 4096-bit key. This will definitely save you some resources, as shown in this blog post from CertSimple:

4096 bit handshakes are indeed significantly slower in terms of CPU usage than 2048 bit handshakes.

HTTP Public key pinning (HPKP) is a security mechanism which only protects against a relatively rare MitM attack that’s very hard to pull off. Someone would have to impersonate you with a fraudulent certificate generated via a Certificate Authority that your browser already trusts.

But if misconfigured, it can really brick your website. And it is not supported by Internet Explorer and Edge, as you can see in the Microsoft Edge Platform Status. HPKP is also not recommended with Let’s Encrypt and has a lot of additional requirements. This is explained by Peter Eckersley from the EFF in this blog post.

Let’s Encrypt

These days, encryption is a necessary part of any self-respecting IT organization. If your traffic is unencrypted, you (or your visitors) are almost certainly being monitored by someone in some government somewhere on our little planet. If not by the so-called “Five Eyes” (Australia – Canada – New Zealand – United Kingdom – United States), then some other government organization run by the Russians or the Chinese, or another lesser-known security agency or hacker group, might have you on their radar. Luckily, Let’s Encrypt entered public beta on 3 December 2015 (not that this means you are suddenly safe 😀).

The Law

If you want information about cryptography laws in your country, please consult the linked survey. It has a very detailed list of existing and proposed laws and regulations on cryptography for almost any country.

Owned by?

Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG). ISRG is a California public benefit corporation. Major sponsors are the Electronic Frontier Foundation (EFF), the Mozilla Foundation, Akamai and Cisco Systems. Other partners include the certificate authority IdenTrust, the University of Michigan, the Stanford Law School and the Linux Foundation. Not the smallest of organizations, are they?


The key principles behind Let’s Encrypt are:

  • Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
  • Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
  • Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
  • Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
  • Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.

Do you need any more convincing? For years, people have been paying far too much for their SSL certificates to security companies such as Comodo, GlobalSign, GoDaddy, Thawte and others. I have never really understood why they cost so much. And why would we trust them? Remember DigiNotar?

DigiNotar was a Dutch certificate authority owned by VASCO Data Security International. On September 3, 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over operational management of DigiNotar’s systems. That same month, the company was declared bankrupt. After more than 500 fake DigiNotar certificates were found, major web browser makers reacted by blacklisting all DigiNotar certificates. The scale of the incident was used by some organizations like ENISA to call for a deeper reform of HTTPS in order to remove the weakest link possibility that a single compromised CA can affect that many users.


An initiative such as Let’s Encrypt, carried and supported by a huge number of people and organizations, is in my humble opinion a much safer option than trusting your certificates to the hands of relatively small companies. All those so-called security companies claim to be secure, but in the meantime fail to deliver transparent and open protocols. Your security might be compromised by just one overworked individual failing to do his job properly.

But what can other Certificate Authorities offer that Let’s Encrypt can’t?

There are three types of SSL certificates: Domain Validated (DV), Organization Validated (OV) and Extended Validation (EV). To get a DV cert, you only need to prove that you control the domain for which the certificate is issued. For an OV cert, the CA checks with third parties to ensure that the name of the applying organization is the same as the one which owns the domain. For an EV cert, the kind that turns your browser address bar green, you need to provide much more extensive documentation, and there are no personal EV certs. The very fact that the Let’s Encrypt process is automated means that they will not be able to offer anything other than DV certificates. To many companies this isn’t enough. You should use Let’s Encrypt:

  • If you are running your own web server
  • If you have a registered, publicly accessible domain name

You should not use Let’s Encrypt if:

  • You are on shared web hosting
  • You want to keep the existence of your certificate a secret
  • You need wildcard certificates
  • You need long-lived certificates
  • You need Extended Validation
  • You want your certificate to be trusted by older software

SSLLabs Rating Methodology

SSLLabs’ approach to calculating the rating consists of four steps.

Certificate Inspection

They look at the certificate to verify that it is valid and trusted. Server certificates are often the weakest point of an SSL server configuration. Certificates that aren’t trusted fail to prevent MITM attacks.

Any of the following certificate issues immediately result in a zero score:

  • Domain name mismatch
  • Certificate not yet valid
  • Certificate expired
  • Self-signed certificate
  • Use of a certificate that is not trusted (unknown CA or some other validation error)
  • Revoked certificate
  • Insecure certificate signature (MD2 or MD5)
  • Insecure key

So how are the Let’s Encrypt certificates doing? Let’s Encrypt’s intermediate is signed by ISRG Root X1. However, since they are a very new certificate authority, ISRG Root X1 is not yet trusted in most browsers. In order to be broadly trusted right away, their intermediate is also cross-signed by another certificate authority, IdenTrust, whose root is already trusted in all major browsers. Specifically, IdenTrust has cross-signed Let’s Encrypt’s intermediate using their DST Root CA X3.


They inspect the server configuration in three categories. The category scores are combined into an overall score expressed as a number between 0 and 100. A zero in any category will push the overall score to zero. 
They then apply a series of rules to handle some aspects of server configuration that cannot be expressed via numerical scoring. Most rules will reduce the grade (to A-, B, C, D, E, or F) if they encounter an unwanted feature. Some rules will increase the grade (to A+), to reward exceptional configurations.

Protocol support

SSLLabs looks at the protocols supported by an SSL server. For example, both SSL 2.0 and SSL 3.0 have known weaknesses. Because a server can support several protocols, they use the following algorithm to arrive at the final score:

  1. Start with the score of the best protocol
  2. Add the score of the worst protocol
  3. Divide the total by 2

For example, a server whose best protocol is TLS 1.2 (100%) and whose worst is TLS 1.0 (90%) gets a protocol score of (100 + 90) / 2 = 95.

Protocol  Score
SSL 2.0   0%
SSL 3.0   80%
TLS 1.0   90%
TLS 1.1   95%
TLS 1.2   100%

Key exchange support

The key exchange phase serves two functions. The first is authentication, performed with asymmetric encryption, allowing at least one party to verify the identity of the other. The second is to ensure the safe generation and exchange of the secret keys that will be used during the remainder of the session. Weaknesses in the key exchange phase affect the session in two ways:

  • Key exchange without authentication allows an active attacker to perform a MITM attack, gaining access to the complete communication channel.
  • Most servers also rely on public-key cryptography for the key exchange. Thus, the stronger the server’s private key, the more difficult it is to break the key exchange phase.
Key exchange aspect                                     Score
Weak key (Debian OpenSSL flaw)                          0%
Anonymous key exchange (no authentication)              0%
Key or DH parameter strength < 512 bits                 20%
Exportable key exchange (limited to 512 bits)           40%
Key or DH parameter strength < 1024 bits (e.g., 512)    40%
Key or DH parameter strength < 2048 bits (e.g., 1024)   80%
Key or DH parameter strength < 4096 bits (e.g., 2048)   90%
Key or DH parameter strength >= 4096 bits (e.g., 4096)  100%

Cipher support

To break a communication session, an attacker can attempt to break the symmetric cipher used for the bulk of the communication. A stronger cipher allows for stronger encryption and thus increases the effort needed to break it. Because a server can support ciphers of varying strengths, SSLLabs penalizes the use of weak ciphers.

Cipher strength              Score
0 bits (no encryption)       0%
< 128 bits (e.g., 40, 56)    20%
< 256 bits (e.g., 128, 168)  80%
>= 256 bits (e.g., 256)      100%

You can find the SSL server rating guide here.

Requirements for getting an A+ rating on SSLLabs

Use any of this code at your own risk. Do not use it on a production web server. Do not use it if you don’t know what you are doing.

Please replace any variables such as $Hostname and $Email with values.


Create SSL Configuration
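
A sketch of a vhost that enables HSTS and OCSP stapling, the two features needed for the A+ rating; the certificate paths assume Certbot’s default layout and $Hostname is a placeholder:

```apache
# OCSP stapling cache: must be defined outside the vhost
SSLStaplingCache shmcb:/run/httpd/stapling_cache(128000)

<VirtualHost *:443>
    ServerName $Hostname
    DocumentRoot /var/www/$Hostname

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/$Hostname/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/$Hostname/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/$Hostname/chain.pem

    # Disable the broken protocols and weak ciphers
    SSLProtocol all -SSLv2 -SSLv3
    SSLHonorCipherOrder on
    SSLCipherSuite HIGH:!aNULL:!MD5:!3DES

    # HSTS header (requires mod_headers); needed for the A+ rating
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

    # Enable OCSP stapling for this vhost
    SSLUseStapling on
</VirtualHost>
```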

Let’s Encrypt

I’m assuming you have run Certbot to generate your Let’s Encrypt certificates. If not, start by installing the python-certbot-apache yum package:

Then run
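
A sketch of both steps, assuming the EPEL repository is enabled and using the $Hostname and $Email placeholders mentioned earlier:

```shell
# Install Certbot with the Apache plugin (requires EPEL)
yum install -y python-certbot-apache

# Request a certificate; replace $Hostname and $Email with real values
certbot certonly --webroot -w /var/www/$Hostname -d $Hostname -m $Email --agree-tos
```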


Make sure httpd and mod_ssl yum packages are installed on the server.

You can check your Apache version like this:

Enable the httpd service:

Create the webroot:

Set webroot owner:

Furthermore set the webroot permissions:

Create demo page:

Create vhost directory /etc/httpd/sites-available

Equally create vhost directory /etc/httpd/sites-enabled

Add sites-enabled/*.conf to httpd.conf:

Create Apache vhost:

Link Apache vhost:

Restart Apache:

In addition add the firewalld https service:

Finally restart firewalld:
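
The Apache and firewall steps above can be sketched as one sequence; the webroot path and $Hostname are placeholders:

```shell
# Install Apache and mod_ssl, check the version, then enable the service
yum install -y httpd mod_ssl
httpd -v
systemctl enable httpd --now

# Create the webroot with sane ownership and permissions, plus a demo page
mkdir -p /var/www/$Hostname
chown -R apache:apache /var/www/$Hostname
chmod -R 755 /var/www/$Hostname
echo '<html><body><h1>It works!</h1></body></html>' > /var/www/$Hostname/index.html

# Debian-style vhost layout, included from httpd.conf
mkdir -p /etc/httpd/sites-available /etc/httpd/sites-enabled
echo 'IncludeOptional sites-enabled/*.conf' >> /etc/httpd/conf/httpd.conf

# Create the vhost file under sites-available, link it, restart Apache
vi /etc/httpd/sites-available/$Hostname.conf
ln -s /etc/httpd/sites-available/$Hostname.conf /etc/httpd/sites-enabled/$Hostname.conf
systemctl restart httpd

# Open HTTPS in the firewall and reload firewalld
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
```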

Final Words

First of all, let me know if I forgot something. As much as I tried to make the article as accurate as possible, I might have made a mistake. Qualys is also continuously improving its tests and rating methods. As a result, some parts of this article might no longer be valid.

Furthermore, please note that an A+ rating is probably not ideal for every web application. Receiving an A- or A rating does not mean that your web application is suddenly vulnerable to any malicious attack. Different websites have different needs, which means that there is no ‘perfect’ configuration that works for everyone.

I would like to suggest this GitHub project from SSLLabs which contains some very useful recommendations about SSL and TLS deployments.

Please let me know if something I wrote is incorrect.

Monitor Raspberry Pi with Nagios


Over the past week, I had multiple questions about how to monitor a Raspberry Pi with Nagios. Monitoring is crucial to proactively find any issues that might come up. There are multiple ways to achieve this. I’ll try to build up this how-to from the ground up, starting with the traditional method: using the official Nagios NRPE agent.

NSClient++ does not support Raspbian for now. Michael Medin told me in this forum thread that he is planning to port it once he finds some spare time.

It’s also possible to install Go and Telegraf on your Raspbian, but I haven’t got the time to test that. 

How to Monitor a Raspberry Pi with the NRPE Agent?

The code below worked fine for me on Raspbian Jessie
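
A sketch of a from-source NRPE install on Raspbian; the release version and download URL are assumptions, so check the Nagios site for the latest release:

```shell
# Build dependencies
sudo apt-get update
sudo apt-get install -y build-essential libssl-dev wget

# Create the nagios user the daemon will run as
sudo useradd nagios

# Download, build and install NRPE (version is an assumption)
wget https://github.com/NagiosEnterprises/nrpe/releases/download/nrpe-3.2.1/nrpe-3.2.1.tar.gz
tar xzf nrpe-3.2.1.tar.gz
cd nrpe-3.2.1
./configure --enable-command-args
make all
sudo make install
sudo make install-config   # installs nrpe.cfg in /usr/local/nagios/etc
sudo make install-init
sudo systemctl enable nrpe --now
```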

Create nrpe.cfg in /usr/local/nagios/etc

The relevant part of my nrpe.cfg looks like this:
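
A sketch of those relevant settings; the command definitions and plugin paths are assumptions to adapt to your setup:

```ini
log_facility=daemon
pid_file=/var/run/nrpe.pid
server_port=5666
nrpe_user=nagios
nrpe_group=nagios
allowed_hosts=127.0.0.1,<ip-of-your-Nagios-server-here>

command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
```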

Make sure to replace <ip-of-your-Nagios-server-here> with (you’ll never guess) the IP of your Nagios server.

Let me know if you experience any issues.



How to Debug Perl Scripts with EPIC Eclipse


About a year ago, I took over development of John Murphy’s NetApp ONTAP cluster monitoring plugin, so I was in need of some way to debug Perl scripts on Windows. After some online research, it seemed Eclipse with the EPIC plugin was the way to go. One small note: Eclipse is built with Java and hence consumes quite a bit of RAM. I wouldn’t recommend using it with less than 4 GB of RAM.

Debug Perl with EPIC Eclipse


To make it easier for people to debug Perl scripts from a Windows client, I’ll list the steps to get things running smoothly.

How to debug Perl scripts?

  1. Download and install Eclipse.
  2. Download and install ActivePerl.
  3. Verify the installation by opening cmd.exe and typing perl -v.
  4. Open Eclipse, go to the Help menu and select Eclipse Marketplace. Search for EPIC or Eclipse Perl Integration and install the EPIC components. You can find more info about EPIC on their website.
  5. In order to see local variables, PadWalker needs to be installed. Before you can use the Perl Package Manager, your client needs a reboot (after installation of ActivePerl). Open a command window (cmd.exe) and type ‘ppm install PadWalker’, which should result in something similar to this output:
  6. The next thing is to show line numbers, as looking for line 1085 in a 2000-line script is quite hard without them. Go to the Window menu and choose Preferences. Next, choose Perl EPIC in the left column and enable the checkbox left of “Show line numbers”.

Enjoy your Perl debugging hunt. I hope it helps you find and solve issues in the check_netapp_ontap script.

Let me know in a comment if there are better free ways to debug Perl on Windows!