Running Selenium with ChromeDriver together with a client certificate and OAuth2, in headful mode with .NET 5.0 or .NET Core 3.1

Sebastian Gedda
Feb 21, 2021 · 4 min read

I wanted to schedule API calls to an API that required an installed client certificate to access it. The problem was that the API did not let me in just by providing the client certificate and password through code; I tried without success in C# with the X509Certificate class. I believe the reason I couldn't make it work is that the site requiring the certificate does something more behind the scenes: in this case it seems cookies are set by the server before it redirects back to the allowed (whitelisted) URL with the resulting client code needed to access the API. This code is only valid for an hour, so we need to repeat the same type of request to receive a new one. After some struggling with the plain C# solution I decided to take another approach, which ended up as the final working solution described below with Selenium, meaning we simulate the necessary requests with a real web browser (Chrome) running.
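For reference, the kind of direct approach I first tried looked roughly like the sketch below; the file name, password and URL are placeholders, not the real values from my setup.

using System.Net.Http;
using System.Security.Cryptography.X509Certificates;

// Attach the client certificate directly to outgoing requests. For this particular API it
// was not enough: the server also relies on cookies and redirects set during the handshake.
var handler = new HttpClientHandler();
handler.ClientCertificates.Add(
    new X509Certificate2("client-certificate.pfx", "<pfx-password>"));

using var client = new HttpClient(handler);
var response = await client.GetAsync("https://example-api.com/protected-endpoint");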

The API uses OAuth2 for authentication. That means we need an authorized client_id, and in this case also a client certificate installed on the machine that calls the API, to secure it. The certificate and client id are provided by the owner of the API, and the services have to be called through whitelisted URLs that we have given to the API owner. To call the services we can install the certificate locally and request the API token from our local machine. The downside is that we have to refresh the web page from the computer where we installed the certificate each time the access token expires. That means a dependency on our local computer and a manual process to call the website address where the OAuth token retrieval and API calls are done.

Normally I would stick to headless mode when running Selenium on a server and only use headful mode when developing or debugging locally. But in this case I needed to run with a client certificate installed on the server and automagically choose/allow the installed certificate, which did not work in headless mode.


So the first tricky part for me as an inexperienced Linux user was to add the local certificate to my Linux environment in Docker. In my case the certificate was a .pfx certificate, which means it contains a private key that needs to be provided when it is installed on the server. In the end it comes down to a few short commands that need to be added to your Dockerfile entrypoint script.

First we create a new directory for our certificate database, and in the second step we initialize the database (with certutil) in that same directory without setting a password. Then we add our certificate to the just-initialized certificate database. The last step is to start Xvfb, a virtual display server that acts as a virtual monitor so we can run headful Selenium in our Linux environment.
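A sketch of those entrypoint commands could look like this; the database path, certificate file name, password and display number are assumptions for illustration (the display number must match the DISPLAY variable set later):

# Create a directory for the NSS certificate database that Chrome reads
mkdir -p $HOME/.pki/nssdb

# Initialize an empty certificate database without a password
certutil -N -d sql:$HOME/.pki/nssdb --empty-password

# Import the .pfx certificate (including its private key) into the database
pk12util -d sql:$HOME/.pki/nssdb -i /app/client-certificate.pfx -W "<pfx-password>"

# Start Xvfb as a virtual display so Chrome can run in headful mode
Xvfb :99 -screen 0 1920x1080x24 &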

In order to get this running in our Linux environment we need the right packages installed: the certutil and pk12util commands require libnss3-tools, and the Xvfb command requires the xvfb package.

apt-get install -y \
    libnss3-tools \
    xvfb

Then we need this certificate to be selected without the prompt window from Chrome that asks which certificate to use for the website. To do this we add a file called auto_select_certificate.json to our project. This file should be copied to this specific location in your Linux environment (see further down for usage in the Dockerfile):

$HOME/etc/opt/chrome/policies/managed/auto_select_certificate.json

File content of auto_select_certificate.json, which basically tells Chrome to pick an installed client certificate automatically instead of prompting for one.
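A minimal policy that matches every URL and any installed certificate looks like this:

{
  "AutoSelectCertificateForUrls": [
    "{\"pattern\":\"*\",\"filter\":{}}"
  ]
}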

In this example I am going to use the recently released .NET 5.0 base image in my Docker setup below.
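A trimmed-down sketch of such a Dockerfile could look like this; the base image tag, browser/driver package names and file paths are assumptions that need to match your own project:

FROM mcr.microsoft.com/dotnet/aspnet:5.0

# libnss3-tools gives us certutil/pk12util, xvfb gives us the virtual display,
# and we need a Chromium build plus a matching driver for Selenium
RUN apt-get update && apt-get install -y \
    libnss3-tools \
    xvfb \
    chromium \
    chromium-driver

# Tell Chrome which virtual display to use and silence D-Bus errors
ENV DISPLAY=:99
ENV DBUS_SESSION_BUS_ADDRESS=/dev/null

# Policy file that makes Chrome auto-select the installed client certificate
# (adjust the destination to the managed-policies directory mentioned above)
COPY auto_select_certificate.json /etc/opt/chrome/policies/managed/

WORKDIR /app
COPY ./publish .

# Entrypoint script that sets up the certificate database and starts Xvfb
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]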

There are a few important things in this Dockerfile that make this work without running in headless mode. First we need to install a package called xvfb, which provides a virtual display so we can run in headful mode. When running in headless mode I kept getting the error ERR_BAD_SSL_CLIENT_AUTH_CERT, and it did not help to use ChromeOptions with --ignore-certificate-errors, which otherwise seems to be the commonly accepted solution.

With xvfb installed we also set the environment variables DISPLAY=:99 and DBUS_SESSION_BUS_ADDRESS=/dev/null, where DISPLAY tells Chrome which virtual display to use, the same display number we started Xvfb on in the entrypoint script earlier.

Then I think we are ready to look at the C# code that runs the Chrome browser in our Linux environment with the installed certificate and receives the code that, in my case, is needed to call a protected API.
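A minimal sketch of that code could look like the following; the authorize URL, the wait/redirect handling and the exact ChromeOptions arguments are illustrative placeholders rather than the exact values from my project:

using System;
using System.Threading;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public static class OAuthCodeFetcher
{
    // Hypothetical authorize URL - the real URL, client_id and redirect_uri come from the API owner
    private const string AuthorizeUrl =
        "https://login.example-api.com/oauth2/authorize?response_type=code&client_id=<client_id>&redirect_uri=<whitelisted_url>";

    public static string FetchAuthorizationCode()
    {
        var options = new ChromeOptions();
        // Note: no --headless here, the client certificate is only picked up in headful mode (via Xvfb)
        options.AddArgument("--no-sandbox");
        options.AddArgument("--disable-dev-shm-usage");
        options.AddArgument("--disable-gpu");
        options.AddArgument("--window-size=1920,1080");
        // With the Debian chromium packages you may also need:
        // options.BinaryLocation = "/usr/bin/chromium";

        using var driver = new ChromeDriver(options);
        try
        {
            // Chrome auto-selects the installed certificate thanks to the managed policy file,
            // the server sets its cookies and redirects back to the whitelisted URL with ?code=...
            driver.Navigate().GoToUrl(AuthorizeUrl);

            // Wait (up to ~30 s) for the redirect that carries the code
            var deadline = DateTime.UtcNow.AddSeconds(30);
            while (!driver.Url.Contains("code=") && DateTime.UtcNow < deadline)
            {
                Thread.Sleep(500);
            }

            var query = new Uri(driver.Url).Query;
            return System.Web.HttpUtility.ParseQueryString(query).Get("code");
        }
        catch (Exception)
        {
            // Screenshot to see what the browser was doing on the server when something went wrong
            ((ITakesScreenshot)driver).GetScreenshot().SaveAsFile("/app/selenium-error.png");
            throw;
        }
    }
}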

The above code is roughly what I ended up with after much trial and error. Some of the parameters in the ChromeOptions are probably not necessary. The try/catch was used to give me a screenshot from the browser so I could track what was going on in the Selenium browser on the server before I got the final solution up and running.

The working solution is up and running as a scheduled cron job with Hangfire, set to run every night to get the latest data from the API.
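A sketch of the Hangfire registration could look like this; the job id, schedule and the method being called are illustrative:

using Hangfire;

// Register a recurring job that runs every night at 02:00 and refreshes the data from the API.
// FetchAuthorizationCode is the sketch from earlier; the real job also uses the received code
// to call the protected API and store the result.
RecurringJob.AddOrUpdate(
    "nightly-api-sync",
    () => OAuthCodeFetcher.FetchAuthorizationCode(),
    Cron.Daily(2));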

Proof of concept of the solution can be found here:
https://github.com/sgedda/Selenium.Docker.Certficate


Sebastian Gedda

Full stack developer living in Stockholm involved in tech startups.