Friday, May 26, 2023
There are few sectors within the economy that aren’t currently questioning how artificial intelligence (AI) will affect their future ways of working.
It’s certainly something that the legal community is grappling with, and which was a major topic of conversation when I attended the recent Relativity Fest and Future Lawyer Week conventions in London.
Among lawyers and forensic investigation professionals, there’s excitement, for good reason, about what this technology will bring. How is the latest AI technology being used now in eDiscovery and what is likely to come next? How ready are we – and the sophisticated software platforms many of us use – to capitalise on AI’s potential?
The legal profession and forensic investigators are no strangers to artificial intelligence.
Technology already works as part of eDiscovery processes, and with powerful results – often helping investigators and legal teams to quickly generate insights from large, and otherwise unwieldy, data sets.
However, this tends to use machine learning, where the program applies mathematical models to human input to ‘learn’ about the data sets it has been exposed to.
But there’s now an opportunity to expand our toolkit to use conversational AI systems – similar to those that underpin the now globally recognised ChatGPT tool.
These systems go beyond machine learning alone, allowing a human to ask questions, or give prompts, relating to an underlying data set. That data set could be anything from a collection of newspaper articles to an archive of the world wide web, or a targeted set such as someone’s corporate emails and chat messages – even mobile chats on WhatsApp or iMessage. The system then generates an output in response. In ChatGPT’s case, the AI even tracks context and the user’s ‘objective’, letting the user ask follow-up questions, or demand refinements, over the course of a ‘dialogue’.
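The ‘dialogue’ mechanic can be illustrated with a minimal Python sketch. Here `model_reply` is a hypothetical stand-in for a real conversational AI back end, not any particular product’s API – the point is only the running history that makes follow-up questions and refinements possible:

```python
def model_reply(history):
    # Placeholder: a real system would generate an answer from the
    # full conversation history; here we simply echo the latest prompt.
    return f"(answer to: {history[-1]['content']})"

def ask(history, prompt):
    """Record the user's prompt, obtain a reply, and record that too,
    so every later question is interpreted against the whole dialogue."""
    history.append({"role": "user", "content": prompt})
    reply = model_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

dialogue = []
ask(dialogue, "Summarise the emails between A and B.")
# The follow-up only makes sense given the earlier turn - the growing
# history is what lets the system track context and refine its answers.
ask(dialogue, "Now list only the ones mentioning invoices.")
```

Because each turn is appended to the same list, the second question is answered with the first still ‘in view’ – the essence of a conversational, rather than one-shot, system.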
For lawyers and forensic investigators, these types of tools have particularly exciting potential in helping to deliver much more effective interview strategies for the examination of human suspects or witnesses.
How? Let’s take a hypothetical fraud case as an example.
In the course of their work, investigation teams will be looking to speak to witnesses and suspects as they gather evidence.
But without the help of technology, the scope of what they ask will naturally be limited by how much they understand about the case – including what insights they’ve had the time and capacity to uncover from what could be terabytes-worth of data.
In many instances, this means that investigators are starting with relatively little to go on. That gives those who may have committed fraud an opportunity to hide behind, or benefit from, a fundamental lack of insight on the investigators’ part.
With conversational AI, this all changes.
These systems can be exposed to the ‘language data’ in a case – whether that’s emails, text messages or call transcripts – between two people or many, and can then be asked ‘natural’ questions by the investigation team based on this information. For example, investigators can ask for a summary of everyone contacted by one person over the course of a specific period of time, or a list of instances where an individual discussed a certain topic or issue.
This can be done almost instantly, saving time and resources, and the technology is even sophisticated enough to recognise differences in sentiment and tone – allowing investigation teams, for example, to ask the AI specifically to identify instances of anger or aggression in communications.
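To make the workflow concrete, here is a minimal Python sketch. The `Message` record, the `contacts_in_period` helper and the `build_prompt` function are all hypothetical illustrations – in practice the assembled prompt would be sent to whichever conversational AI system the eDiscovery platform integrates:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Message:
    sender: str
    recipient: str
    sent: date
    text: str

def contacts_in_period(messages, person, start, end):
    """Everyone `person` contacted between `start` and `end`, inclusive."""
    return {m.recipient for m in messages
            if m.sender == person and start <= m.sent <= end}

def build_prompt(messages, question):
    """Bundle the case's 'language data' with an investigator's
    natural-language question, ready to pass to a conversational AI."""
    corpus = "\n".join(f"[{m.sent}] {m.sender} -> {m.recipient}: {m.text}"
                       for m in messages)
    return f"Case communications:\n{corpus}\n\nQuestion: {question}"

msgs = [
    Message("alice", "bob", date(2022, 3, 1), "Can we move the payment?"),
    Message("alice", "carol", date(2022, 3, 5), "Invoice attached."),
    Message("bob", "alice", date(2022, 4, 2), "Done."),
]
contacts_in_period(msgs, "alice", date(2022, 3, 1), date(2022, 3, 31))
# -> {"bob", "carol"}
```

A question about tone or aggression would go the same way: the structured filter narrows the corpus, and the natural-language question in the prompt does the rest.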
Ultimately, this means investigation teams can efficiently hone their interview strategies.
Instead of starting from what is likely a place of exploration – primarily gathering information – they can open interviews with far more pointed questions that are harder to evade: for example, asking a witness “why did you contact this person?” rather than “did you contact them?”.
There’s always a concern with this type of technology that it will replace the lawyer or the forensic investigator. But it’s really a case of augmentation – giving human professionals what they need to do the job of interviewing better, a job that fundamentally requires uniquely human insight, reasoning and emotional intelligence.
The AI technology required to serve this type of role in investigations is available to use. But it’s not currently widely featured as part of eDiscovery platforms.
In the immediate term, this could prove to be a serious point of competitive differentiation between eDiscovery software providers, with those that can integrate large language model AIs into their full suite of powerful investigation functions gaining an edge.
Security and privacy will be important considerations to address here, and ones that we’re already seeing eDiscovery platform providers take very seriously. Some law firm clients will, understandably, want assurances that the information they share will not be ‘remembered’ or managed outside the strict parameters of the task at hand.
There will also be concerns about AI ‘ingesting’ and mirroring human biases. This is a major area of focus for the AI sector, including not-for-profit entities such as the Responsible AI Institute.
AI is a tool. And, like any tool, its effectiveness depends on how well it is used. This includes good outcomes in areas like security and bias avoidance.
Ultimately, large language model AI will be most powerful when it’s combined with a full suite of investigative approaches, and when it is deployed under the command of experts in eDiscovery and investigative processes. That’s where we’ll really see the full extent of its potential to transform.