Catching up with the CTO – AI and the R&D journey

Each month, FanFinders’ CTO Paul Gwynn will be sharing insights into how we’re using technology to transform experiences for our members and clients.

If you work in technology (or even if you don’t), you’ll know the biggest buzz in tech over the past few years has been Artificial Intelligence (AI).

A few years ago, AI was the preserve of billion-dollar corporations and governments, but with companies such as Google, Microsoft, Amazon and IBM packaging up their AI offerings for developers, it’s become easier for us mere mortals to integrate it into our systems.

At FanFinders, we’re no slouches when it comes to tech. We are, after all, primarily a tech-driven company, and we have a definitive roadmap for our tech future.

Known internally as ‘Project V3’, a large part of this project is to introduce AI into our systems logic and workflows. We’re not doing this because it’s cool, because it’s the latest fad, or because the CEO says ‘we need this now!’ We need to see real benefits, and the only way to establish them is through R&D.

As part of this, one of the things we’re set on upgrading is our email campaign manager. It’s a part of our ecosystem we’ve identified that could benefit from AI, but first we needed to prove that.

The Problem

FanFinders sends out a lot of emails over the course of a year: I’m talking hundreds of millions, if not more. It’s not unusual to have new email campaigns going out on a daily basis, and building these and setting up the target parameters is a very manual, labour-intensive process.

We also have our ‘Domain Reputation’ to protect: step out of line and email providers such as Google and Microsoft will automatically direct your emails to spam. Plus, we really don’t want to be sending our members emails that are irrelevant to them.

To this end, we decided to spike a mini AI system that predicts how a user will respond to an email based on their demographics, historical activity and the email content. This is something new for our business domain, namely expectant and new parents.

The Objectives

We kept our objectives high level and simple, as follows:

  1. Can AI determine correlations between a user’s demographic and their predisposition to respond to an email campaign within our specific business domain?
  2. Can we reduce send counts whilst retaining or improving click-through rates?
  3. Can we reduce unsubscribes with better campaign targeting?
  4. Can we determine the main steps in building, testing and implementing an AI model for this specific purpose?

The Approach

We also kept our approach as simple as we could:

  1. Build multiple AI Models using different parameters
  2. Test those models against existing campaigns
  3. Select the best model and an established campaign and run a real-world test

To expedite the building and testing of multiple models, we needed to build a small but specialist jig that automated data import, data transforms, model training, testing and predictions.

Although there was a time and resource cost to this, it more than paid for itself over the course of the project.
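For illustration, a jig like the one described can be sketched as a small pipeline object that chains the import, transform and train steps so model variants can be rebuilt and compared automatically. Everything below (the `Pipeline` class, the toy data and the stage functions) is a hypothetical sketch, not our actual code:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Pipeline:
    """Chains named stages so a model can be rebuilt end-to-end on demand."""
    stages: list = field(default_factory=list)

    def stage(self, name: str, fn: Callable[[Any], Any]) -> "Pipeline":
        self.stages.append((name, fn))
        return self

    def run(self, data: Any) -> Any:
        for _name, fn in self.stages:
            data = fn(data)
        return data

# Toy input: raw string rows, as they might arrive from a data import step.
raw = [
    {"age": "31", "clicked": "1"},
    {"age": "29", "clicked": "0"},
    {"age": "34", "clicked": "1"},
]

def transform(rows):
    # Normalise types so the "training" step can work with them.
    return [{"age": int(r["age"]), "clicked": r["clicked"] == "1"} for r in rows]

def train(rows):
    # Stand-in "model": the overall engagement rate of the training data.
    # A real jig would train and evaluate many model variants at this stage.
    return sum(r["clicked"] for r in rows) / len(rows)

model = Pipeline().stage("transform", transform).stage("train", train).run(raw)
print(round(model, 2))  # prints 0.67 (engagement rate of the toy data)
```

The point of the jig is exactly this shape: once the stages are chained, swapping in a different transform or training function means re-running one call rather than redoing the whole process by hand.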

Prediction Measure

From our model, we wanted to feed in our parameters and get back a confidence prediction that the recipient would engage with the email.

The values used were on a scale of 1 to 100, where 1 is an ‘extremely small chance of engagement’ and 100 is ‘highly likely to engage’. It’s important to note that a score of 100 does not guarantee the recipient will engage, only that the chances of them engaging are pretty good.
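To make the measure concrete, here’s a minimal sketch of how a model’s raw engagement probability could be mapped onto that 1-to-100 scale and used to filter a send list. The function names, the threshold handling and the sample recipients are all illustrative assumptions, not our production code:

```python
def engagement_score(probability: float) -> int:
    """Map a model's 0.0-1.0 engagement probability to the 1-100 scale."""
    probability = min(max(probability, 0.0), 1.0)  # clamp to a valid range
    return max(1, round(probability * 100))        # floor at 1, never 0

def select_recipients(predictions, threshold=50):
    """Keep only recipients whose score meets the threshold."""
    return [name for name, p in predictions if engagement_score(p) >= threshold]

# Hypothetical recipients with model-predicted engagement probabilities.
candidates = [("alice", 0.82), ("bob", 0.31), ("carol", 0.55)]
print(select_recipients(candidates))  # prints ['alice', 'carol']
```

Filtering this way is what lets send counts fall while the click-through rate of what remains goes up: low-scoring recipients simply never receive the campaign.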

The Result

The campaign in question goes out to a consistent number of contacts (although the individual contacts differ between each send).

From crunching our test numbers, we could see that a prediction score of 50 is the threshold where benefits become really noticeable: click-through rate increased by 25% while total sends dropped to a sixth of their usual volume (yeah, we’d definitely take that!). But how did the real-world send perform against our prediction?

Our real-world result was, to be quite frank, unexpected. Considering it was a test/prototype model, and the engineering that went into it reflected that, the real-world test produced the following:

Total sends dropped to a sixth of their usual volume (as predicted) and the click-through rate quadrupled. That’s six times fewer sends with four times the engagement.
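Those headline multipliers can be sanity-checked with invented numbers (the volumes and rates below are hypothetical, not our real campaign figures):

```python
# Hypothetical baseline vs AI-targeted send. CTR is expressed as
# clicks per 1,000 sends to keep the arithmetic exact.
baseline = {"sends": 600_000, "clicks_per_1000": 10}  # a 1% CTR
ai_run   = {"sends": 100_000, "clicks_per_1000": 40}  # a 4% CTR

send_reduction = baseline["sends"] / ai_run["sends"]                  # 6.0
ctr_uplift = ai_run["clicks_per_1000"] / baseline["clicks_per_1000"]  # 4.0
print(send_reduction, ctr_uplift)  # prints 6.0 4.0
```

In other words, six times fewer emails hit inboxes while each remaining email is four times more likely to be clicked, which is also exactly the kind of behaviour mailbox providers reward when scoring domain reputation.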

So how did our unsubs do? We normally get around 100 unsubscribes per campaign sent; on our real-world AI send we had zero recipients unsubscribe (yes, that’s correct: zero, nada, none).

Taking the above result and overlaying it against our objectives, it’s plain to see that the answer to objectives 1, 2 and 3 is a resounding ‘Yes’.

With respect to objective 4, the learnings from building the prototype helped us establish what we should and shouldn’t be doing at a technical level when we do the final live implementation.


It doesn’t take a genius to work out there’s a benefit here and, although more real-world tests will need to take place, this is one AI implementation that gets a green light.

As we roll this out into a finished product, I plan to update you on a regular basis on our progress. I’ll also be writing about other areas where we’re prototyping AI’s suitability to fit into our workflow and what problems we plan to solve using it.