Blog: What Cambridge Analytica means for AI in claims

30 Apr 2018

Andrew Dunkley recently blogged for Post Magazine with his view on the implications of the Cambridge Analytica issue.

The ongoing Facebook / Cambridge Analytica saga poses serious questions for those of us currently working to develop legal artificial intelligence solutions for clients in the claims and insurance sectors. This was not a hack in the purest sense – and it is not clear whether Cambridge Analytica even broke Facebook’s terms and conditions.

To what extent is it acceptable to use data about a person to manipulate their behaviour, even if that data is obtained from public sources without their explicit knowledge or consent? Imagine a client seeking insight from a lawyer on the best strategy to defend a claim. The lawyer will draw upon their experience of numerous similar cases they have seen in the past. They may also do research about the claimant through the web, trying to find out what is likely to motivate the other side so they can better advise their client. From this, the lawyer will design a strategy to persuade the claimant to settle the dispute at the lowest possible value. In other words, the lawyer is using their experience to ‘manipulate’ the claimant’s behaviour for their client’s ends.

Now imagine, if you will, the client seeking comparable insight on strategy through the use of AI. That system will also draw upon outcomes and insights from numerous similar cases it has seen in the past. It can be programmed to use insights about the claimant from third-party sources. Its ultimate recommendation will also try to induce the claimant to settle the dispute on terms favourable to the defendant. Is there really a difference between these two scenarios? While we’re thinking about that, let’s think about a few more hypothetical situations:

  • A claimant alleges that they became sick after eating food provided to them on a package holiday; a human lawyer looks at their Facebook profile to see whether their holiday photos suggest otherwise
  • A lawyer finds out from a claimant’s LinkedIn profile that they are unemployed, and uses this to justify making a lower settlement offer on the basis that the claimant probably needs the money
  • As part of an anti-fraud initiative to prevent shadow broking, an insurer manually reviews the Facebook friends of motor policyholders where a second driver has a different surname

None of these situations involves AI or data scraping in a traditional sense – in each case, a person is using the internet and social media to find publicly available information about another person. That information is then used to inform a decision that impacts upon the life of a human being, in circumstances where their rights as a data subject are not taken into account and they are given no opportunity to intervene and change that calculus. While one or more of these examples might make us feel a bit uncomfortable (especially taking advantage of someone’s unemployment), it’s harder to say that evidence gleaned using these methods shouldn’t be allowed.

Cambridge Analytica was using Facebook data harvested at scale to influence behaviour. However, we can see from these examples that you don’t need millions of data points to do this. It can work at a micro level too, requiring nothing more than a person with a web browser. This means we need to understand whether the problem is the principle of using external third-party data without the knowledge of data subjects, or simply using technology to do it at scale.

The question is: how far can we use this technology before we cross a moral or legal line? If we accept that trying to manipulate the behaviour and actions of the other side is a fundamental part of lawyering, to what extent can we use scalable AI to the same ends?

This field is evolving exponentially, with new developments each month. However, it is vital that those of us trying to push technological boundaries also think hard about ethics. We need to aspire to a higher standard than ‘this is strictly legal’ when designing claims AI tools. Facebook and Cambridge Analytica forgot this. We make the same mistake at our peril.


Disclaimer: This document does not present a complete or comprehensive statement of the law, nor does it constitute legal advice. It is intended only to highlight issues that may be of interest to customers of BLM. Specialist legal advice should always be sought in any particular case.
