Bridging the gap: Unveiling the hidden threats between content moderation and deep investigation


Trust and Safety is a new, developing field, and in many ways it always will be. Threats to users, platforms and the public will always loom large and cast their shadow over the internet.

Trust and Safety teams, composed primarily but not exclusively of front-line content moderators, do their best to harness AI-powered content detection systems and keep their platforms safe, clean and positive. Simply removing content, however, will never suffice: content moderation systems will never be 100% effective, and bad actors of all stripes do their best to outsmart and evade enforcement efforts.

Large platforms and firms also often have specialized threat intelligence teams embedded in their wider Trust and Safety or Information Security departments that handle threat hunting, deep investigation and remediation of serious threats. These teams often act independently of content moderation or even policy teams, and usually focus on organized crime, malign nation-state hacking groups, terrorist or extremist organizations, or other high-level, coordinated efforts to exploit platforms.

These two broad categories cover opposite ends of the threat spectrum: content moderation handles the low-hanging fruit of easily actioned and enforced content, while threat intelligence teams investigate serious, organized threats. Rarely do the twain meet, and many small- to mid-level threats operate openly in the gap between them.

Small- to mid-level threats operating in this gap between content moderation and deep investigation are a huge issue: Trust and Safety teams must escalate illegal or otherwise illicit content to specialized internal teams, law enforcement, NGOs, or other relevant partners to ensure it is handled appropriately.

This is increasingly true not only as good business practice; it is now mandated in the European Union under the DSA (Digital Services Act).

Countering these threats is no easy task; basic investigation, enrichment and automated reporting processes are key to handling their massive scale.

Basic investigation and enrichment are also key because they give law enforcement partners actionable indicators to respond to.

Providing law enforcement with minimal information often leads to a delayed response, as first responders themselves must carry out the initial stages of investigation to uncover relevant information about the situation.

Essentially, what Trust and Safety teams need is the ability to identify, enrich, report and escalate these cases with actionable intelligence to law enforcement or other relevant partners.
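The identify → enrich → report → escalate workflow described above can be sketched in code. This is a minimal, purely illustrative sketch: every name here (`FlaggedEntity`, `enrich`, `build_report`, the stub lookups) is hypothetical and does not represent any specific platform's, vendor's, or OSINT provider's API.

```python
from dataclasses import dataclass, field

# Hypothetical flagged-entity record; real systems would carry far more
# context (timestamps, policy verdicts, moderator notes, etc.).
@dataclass
class FlaggedEntity:
    entity_id: str
    content_type: str
    indicators: dict = field(default_factory=dict)

def enrich(entity: FlaggedEntity, lookups: dict) -> FlaggedEntity:
    """Attach indicators gathered from hypothetical OSINT lookup
    functions (e.g. IP reputation, domain WHOIS, account age)."""
    for name, lookup in lookups.items():
        entity.indicators[name] = lookup(entity.entity_id)
    return entity

def build_report(entity: FlaggedEntity) -> str:
    """Render a plain-text escalation report listing the actionable
    indicators collected during enrichment."""
    lines = [f"Escalation report for {entity.entity_id} ({entity.content_type})"]
    for name, value in sorted(entity.indicators.items()):
        lines.append(f"  - {name}: {value}")
    return "\n".join(lines)

# Usage with stub lookups standing in for real enrichment sources:
entity = FlaggedEntity("acct-1042", "fraud")
enrich(entity, {
    "ip_reputation": lambda _id: "known-proxy",
    "account_age_days": lambda _id: 3,
})
report = build_report(entity)
print(report)
```

The point of the sketch is the shape of the handoff: by the time a case reaches law enforcement or another partner, the report already contains enriched, actionable indicators rather than a bare content flag.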

Here’s where Falkor comes in.

Falkor streamlines the entire escalation process: it connects via APIs to flagged-content databases and moderation workflows, fuses disparate sources of data, enriches them with open-source intelligence tools to drive actionable insights, and automatically generates reports for escalation. This enables T&S escalation analysts to quickly and easily enrich flagged entities with external data sources.
