Cyacomb introduces Similarity Matching

Posted on 3 March, 2026 by Advance 

Edinburgh-based digital triage experts, Cyacomb, today announced the availability of a new Similarity Matching capability within its Examiner Plus platform, enabling law enforcement to identify Child Sexual Abuse Material (CSAM) on mobile devices in minutes, even when images have been shared via messaging applications and altered from their original form.

Image courtesy Cyacomb

When images are shared online or through messaging platforms, such as WhatsApp, they are often compressed or stripped of metadata to speed up transmission. While this improves user experience, it also alters an image’s digital fingerprint, meaning it may no longer match known data sets. As a result, law enforcement agencies can face significant delays when attempting to identify illicit material on suspect devices.

Cyacomb’s new Similarity Matching capability tackles this challenge. Instead of relying solely on exact hash matching, the technology can identify known CSAM even after it has been shared, transcoded or modified, allowing investigators to rapidly detect material that would previously have required lengthy manual reviews or, more concerningly, been missed altogether.
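The underlying problem can be illustrated with a toy sketch (this is not Cyacomb's proprietary algorithm, just a minimal demonstration of the general principle): a cryptographic hash changes completely when even one pixel value shifts during recompression, whereas a simple perceptual fingerprint, here a basic average-hash over brightness, stays stable under small changes.

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: each pixel maps to 1 if it is at or above
    the image's mean brightness, else 0. Real similarity-matching
    systems use far larger, more robust feature representations."""
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p >= mean else '0' for p in pixels)

def hamming(a, b):
    """Count the bit positions where two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 "image" as flat grayscale values, plus a re-encoded copy
# whose pixel values have shifted slightly, as lossy compression does.
original = [200, 198, 60, 55, 201, 199, 58, 52,
            62, 60, 202, 197, 59, 54, 200, 196]
recompressed = [p + 2 for p in original]

crypto_a = hashlib.sha256(bytes(original)).hexdigest()
crypto_b = hashlib.sha256(bytes(recompressed)).hexdigest()

print(crypto_a == crypto_b)   # False: exact hashes no longer match
print(hamming(average_hash(original),
              average_hash(recompressed)))  # 0: fingerprints still match
```

Because the relative brightness pattern survives the re-encoding, the perceptual fingerprints are identical (Hamming distance 0) even though the exact hashes diverge entirely, which is why similarity-based matching can still flag material that exact matching misses.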

Chris Johnson, CEO at Cyacomb, said: “Traditionally, identifying images of child sexual abuse that have been circulated via messaging applications has been extremely difficult and time-consuming for law enforcement because hash values change during transmission. This often forces investigators to manually review devices and images, which can take months while also increasing the risk to officer wellbeing. Our new capability changes that entirely, enabling identification in minutes and supporting faster triage, safeguarding and justice.”

The launch comes at a time of growing concern about the scale and complexity of online abuse. According to recent data from the Internet Watch Foundation (IWF), there were 312,030 reports of child sexual abuse material in 2025, the largest number ever seen and a 7% increase compared to 2024. The IWF has also reported a sharp rise in AI-generated abuse, identifying 3,440 AI-created videos of child sexual abuse in 2025, compared with just 13 the previous year, amounting to an increase of 26,362%.

This worrying trend is reflected more widely across the technology sector. Generative AI tools, including high-profile platforms such as Grok, have come under increased scrutiny over the risks posed by the manipulation of imagery, with recent news coverage of the platform being used to synthetically remove clothing from individuals.

While AI innovation continues to accelerate, it is also increasing the volume and variety of harmful content that law enforcement must contend with, much of which can evade traditional detection measures. It is clear that the data sets of known, graded CSAM images are growing just as fast as the generation of new, previously unidentified images.

Cyacomb’s Similarity Matching capability has been developed specifically to help agencies keep pace with these evolving threats.

Using Cyacomb’s proprietary approach, the technology can identify illicit imagery on devices even when the images have been altered or shared repeatedly, ensuring offenders have no place to hide.