How we use technology to detect harmful content
We use a multi-layered approach to protect our users from harmful content and conduct.
To detect and stop the spread of certain categories of known illegal and harmful imagery, we deploy the hash-matching technologies PhotoDNA and MD5 on image and video content shared through Microsoft hosted consumer services and on content uploaded for visual image searches of the internet. Hashing transforms an image into a series of numbers (a “hash”) that can be easily compared, stored, and processed. These hashes are not reversible, meaning they cannot be used to recreate the original images. Where required, we rely on the derogation permitted by European Union Regulation (EU) 2021/1232 for the use of hash-matching technologies to detect potential child sexual abuse material in services governed by EU Directive 2002/58/EC.
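To illustrate the general idea of hash matching, the minimal sketch below computes an MD5 digest of an image's bytes and checks it against a set of known hashes. The hash values and the KNOWN_HASHES set are hypothetical placeholders, and PhotoDNA, which is a proprietary perceptual-hashing technology that matches visually similar images, is not reproduced here; this shows only simplified exact-match hashing.

```python
import hashlib

# Hypothetical set of MD5 hashes of known illegal images (placeholder values only).
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",
    "7d793037a0760186574b0282f2f435e7",
}

def md5_hash(image_bytes: bytes) -> str:
    """Compute the MD5 digest of an image's raw bytes as a hex string."""
    return hashlib.md5(image_bytes).hexdigest()

def matches_known_content(image_bytes: bytes) -> bool:
    """Return True if the image's hash matches a hash of previously identified content."""
    return md5_hash(image_bytes) in KNOWN_HASHES
```

Because the digest is a one-way function, the stored hashes can be compared against new uploads without retaining or reconstructing the original imagery.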
We may also use machine-learning technologies such as text-based classifiers, image classifiers, and grooming detection techniques to discover content or conduct shared through Microsoft hosted consumer services that may be illegal or violate our policies. Lastly, we leverage reports from users, governments, and trusted flaggers to bring potential policy violations to our attention. These techniques are tailored to the features and services on which they are deployed, meaning we may not use all technologies on all services, or in the same way on every service.
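As a high-level sketch of how a classifier's output might be used to route content, the example below queues a message for human review when a score crosses a threshold. The TextClassifier interface, its score method, and the REVIEW_THRESHOLD value are hypothetical and do not represent Microsoft's actual models or thresholds.

```python
from typing import Protocol

class TextClassifier(Protocol):
    """Interface for any model that scores text for potential policy violations."""
    def score(self, text: str) -> float:
        """Return a probability between 0.0 and 1.0 that the text violates policy."""
        ...

REVIEW_THRESHOLD = 0.8  # hypothetical value; real systems are tuned per policy area

def route_message(message: str, classifier: TextClassifier) -> str:
    """Queue a message for human review when the classifier's score is high enough."""
    if classifier.score(message) >= REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "no_action"
```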
On some of our services, we also deploy tools to detect and disrupt the misuse of video-calling capabilities by high-risk users to produce and share child sexual exploitation and abuse imagery (CSEAI). Microsoft uses several signals to identify high-risk users, including their past direct communication with users who were suspended or blocked for sharing CSEAI. If a user is identified as high risk, and other signals are present, a bot is deployed into the live call. The bot uses artificial intelligence to determine in near-real time whether a live video call contains CSEAI. If CSEAI is detected during the call, video capability in that call is disabled.
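The sketch below outlines the decision flow described above: deploy the detection bot only when a user is high risk and corroborating signals are present, and disable video if CSEAI is detected. The signal names, data structures, and threshold logic are hypothetical stand-ins; the actual signals and detection models are not public.

```python
from dataclasses import dataclass

@dataclass
class CallRiskSignals:
    # Hypothetical signals; the real signal set is not public.
    contacted_accounts_suspended_for_cseai: bool
    corroborating_signal_count: int

@dataclass
class VideoCall:
    video_enabled: bool = True

    def disable_video(self) -> None:
        self.video_enabled = False

def should_deploy_detection_bot(signals: CallRiskSignals) -> bool:
    """Deploy the in-call detection bot only for high-risk users with additional signals."""
    return (signals.contacted_accounts_suspended_for_cseai
            and signals.corroborating_signal_count > 0)

def handle_detection_result(call: VideoCall, cseai_detected: bool) -> None:
    """If CSEAI is detected in near-real time, disable video capability in that call."""
    if cseai_detected:
        call.disable_video()
```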
Potential policy violations come to our attention in three ways: we find it, others find it, or you find it.
Content review
Human reviewers consider images, video, messages, and context.
Policies
Microsoft content and conduct policies explain what is not allowed on our services.