The Cyber5 Podcast

EP46: The Cyber5 – Navigating the Complex Challenges of Trust and Safety Teams

Episode 46 | May 19, 2021

In episode 46 of The Cyber5, we are joined by Charlotte Willner. Charlotte is the Executive Director of the Trust and Safety Professional Association. We define what trust and safety means within organizations and how it differs from traditional cyber and physical security. We focus on fraud and the abuse of user-generated content on technology companies' platforms and marketplaces. Finally, we discuss how security professionals can build a career in trust and safety.

Here are the 5 Topics We Cover in This Episode:

1) Defining Trust and Safety: (02:20 – 04:30)

Trust and safety emerged from different disciplines within technology companies, including security and customer support. Security teams focused on how people were using the platforms for fraud or illicit financial gain. Customer support dealt with abuse by users and the posting of inappropriate content (e.g., illegal narcotics or child sexual exploitation). Over the last 15-20 years, these two disciplines have converged to form the core of the trust and safety mission.

2) The Differences Between Fraud and Consumer User-Generated Content Abuse such as Disinformation: (04:30 – 09:17)

Fraud and abuse of user-generated content overlap considerably within the work of trust and safety teams. Bad actors routinely use technology platforms to defraud individual users, especially on online marketplaces that deal with real-world spaces and objects. For example, Airbnb might combat fraud in which a bad actor misrepresents a listing and tries to move the conversation off-platform in order to steal money from a user. Once an engagement moves off-platform, more violent crimes such as assault, physical theft, or carjacking can also occur.

User-generated content and fraud schemes also raise questions about the nature of truth. Someone impersonating a US military member and asking for help and money is a common scheme across platforms. When trust and safety teams pivot to user-generated content involving disinformation, misinformation, and even equality issues, they have to adapt and craft responses that are fair to everyone affected.

3) Addressing Risk Mitigation and Incident Response in Trust and Safety: (09:17 – 15:30)

When the barrier to entry is minimal or non-existent (most platforms are free to use), trust and safety teams deal with thousands of problems a day, and prioritization is critical. As in other industries (finance, retail, manufacturing), the principles are the same: 1) evaluate the quality of inputs, meaning the sources and the access they provide, and 2) align with business principles and corporate values. These principles have come into sharper focus as teams work to moderate content in a way that is equitable across socioeconomic and political groups.

4) Metrics for Trust and Safety: (15:30 – 17:00)

Prevalence metrics are the gold standard in trust and safety. Once a threat is identified, it is important to build automation that measures how much of that threat exists on the platform and how much of the platform it could affect. The caveat: if you can't count threatening events exactly, you can approximate with simple search functions and still use that estimate to drive a program and its mitigations. A rough sketch of that approach is shown below.
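As a minimal illustration of the prevalence idea (not from the episode), the Python sketch below estimates how widespread a class of violating content is by checking a random sample of items and extrapolating with a simple confidence interval. The `is_violating` callable and the keyword check are placeholders for whatever search function or classifier a team actually uses.

```python
import math
import random

def estimate_prevalence(items, is_violating, sample_size=1000, z=1.96):
    """Estimate the share of violating content from a random sample.

    items        -- iterable of content items (IDs, text, etc.)
    is_violating -- callable returning True for items matching the threat
    sample_size  -- number of items to review; larger samples tighten the bound
    z            -- z-score for the confidence level (1.96 ~ 95%)
    """
    population = list(items)
    sample = random.sample(population, min(sample_size, len(population)))
    hits = sum(1 for item in sample if is_violating(item))
    p = hits / len(sample)
    # Normal-approximation margin of error around the sampled rate.
    margin = z * math.sqrt(p * (1 - p) / len(sample))
    return p, margin

# Placeholder keyword check standing in for a real search or classifier.
corpus = [
    f"post {i} " + ("buy followers now" if i % 50 == 0 else "hello")
    for i in range(100_000)
]
rate, moe = estimate_prevalence(corpus, lambda text: "buy followers" in text)
print(f"Estimated prevalence: {rate:.2%} ± {moe:.2%}")
```

The point is not the specific math but the workflow: sample, measure, and report an approximate prevalence with an error bound rather than waiting for an exact count.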

5) Building a Career in Trust and Safety: (17:00 – 21:00)

The same principles that apply to intelligence analysis are important for trust and safety. Curiosity, integrity, and adaptability are critical because no two days or problems are the same. Entry-level positions are often content moderator roles; people move up through fraud or customer support and eventually into more senior positions that engage directly with threat actors to make them stop, including work with law enforcement. Specialized investigations, tool development, or leadership in trust and safety are common professional development paths.

Listen to other podcast episodes