Moderation and enforcement

Content detection

How we use technology to detect harmful content

We use a multi-layered approach to protect our users from harmful content and conduct.

To detect and stop the spread of certain categories of known illegal and harmful image content, we deploy the hash-matching technologies PhotoDNA and MD5 on images and video content shared through Microsoft hosted consumer services and on content uploaded for visual image searches of the internet. A “hash” transforms images into a series of numbers that can be easily compared, stored, and processed. These hashes are not reversible, meaning they cannot be used to recreate the original images. We rely on the derogation permitted by European Union Regulation (EU) 2021/1232, as required, for the use of hash-matching technologies to detect potential child sexual exploitation and abuse material in services governed by EU Directive 2002/58/EC.
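
To illustrate the hash-matching idea, the sketch below computes an MD5 digest of a file and checks it against a list of known hashes. It is a minimal illustration only: PhotoDNA is a proprietary perceptual-hashing technology that works differently from MD5, and the file path and hash list here are hypothetical placeholders, not Microsoft's actual systems or data.

```python
# Minimal sketch of exact hash matching with MD5. PhotoDNA is proprietary and
# uses perceptual hashing, which tolerates resizing and small edits; MD5 only
# matches byte-identical files. The known-hash set below is a placeholder.
import hashlib

KNOWN_HARMFUL_MD5 = {
    "5d41402abc4b2a76b9719d911017c592",  # hypothetical entry, not a real hash list
}

def md5_of_file(path: str) -> str:
    """Return the MD5 digest of a file's bytes. The digest is a fixed-length
    number that can be compared and stored but cannot be reversed back into
    the original image."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_content(path: str) -> bool:
    """True if the file's hash appears in the known-content hash set."""
    return md5_of_file(path) in KNOWN_HARMFUL_MD5
```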

We may also use machine-learning technologies like text-based classifiers, image classifiers, and grooming detection techniques to discover content or conduct shared through Microsoft hosted consumer services that may be illegal or violate our policies. Lastly, we leverage reports from users, governments, and trusted flaggers to bring potential policy violations to our attention. These various techniques are tailored to the features and services on which they are deployed, meaning we may not use all technologies on all services, nor in the same way on every service.

On some of our services, we also deploy tools to detect and disrupt the misuse of video-calling capabilities to produce and share child sexual exploitation and abuse imagery (CSEAI) by high-risk users. Microsoft uses several signals to identify high-risk users, including their past direct communication with users who were suspended or blocked for sharing CSEAI. If a user is identified as high risk, and other signals are present, a bot is deployed into the live call. The bot uses artificial intelligence to determine, in near-real time, whether a live video call contains CSEAI. If CSEAI is detected during the call, video capability in that call is disabled.
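
The sketch below shows the general pattern of combining several account-level risk signals into a score that gates whether in-call detection is enabled. The signal names, weights, and threshold are hypothetical placeholders for illustration; the near-real-time detection model itself is not represented.

```python
# Illustrative sketch of signal-based gating: several hypothetical risk signals
# are combined into a score, and in-call detection is enabled only when the
# score crosses a threshold. Weights and threshold are placeholders, not
# Microsoft's actual criteria.
from dataclasses import dataclass

@dataclass
class UserSignals:
    contacted_suspended_accounts: bool  # past direct contact with accounts suspended for sharing CSEAI
    prior_policy_strikes: int           # earlier enforcement actions on this account
    account_age_days: int               # newer accounts carry slightly more weight here

def risk_score(s: UserSignals) -> float:
    score = 0.0
    if s.contacted_suspended_accounts:
        score += 0.6
    score += min(s.prior_policy_strikes, 3) * 0.1
    if s.account_age_days < 30:
        score += 0.1
    return score

def enable_in_call_detection(s: UserSignals, threshold: float = 0.7) -> bool:
    """Gate the deployment of the in-call detection bot on the combined score."""
    return risk_score(s) >= threshold
```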

We find it

We have invested in and grown our “classifier” capability to help us proactively detect violations of our policies in certain contexts. A “classifier” is used to recognize and label harmful material based on a predefined policy area, such as violent extremism and terrorism. In addition, we have developed ways to detect content on our services that is likely malicious, such as scams or viruses.
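
As a rough illustration of how a text classifier labels content against a predefined policy, the sketch below trains a toy model and flags anything scoring above a review threshold for human review. The training examples, labels, and threshold are invented placeholders; this is not Microsoft's production tooling.

```python
# Toy text-classifier sketch: score content against a single policy label and
# queue high-scoring items for human review. Data and threshold are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "friendly chat about weekend plans",          # placeholder non-violating examples
    "question about a software update",
    "example of policy-violating text",           # placeholder violating examples
    "another example of policy-violating text",
]
labels = [0, 0, 1, 1]  # 1 = potentially violates the predefined policy

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

REVIEW_THRESHOLD = 0.8  # hypothetical escalation threshold

def flag_for_review(text: str) -> bool:
    """Send the item to human review if the model's violation score is high."""
    violation_score = model.predict_proba([text])[0][1]
    return violation_score >= REVIEW_THRESHOLD
```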

Others find it

Trusted flaggers may identify to Microsoft potentially illegal or harmful content, conduct, or URLs on our services so that we can consider whether action is needed. Trusted flaggers are designated organizations that work with Microsoft either by agreement or as may be required by law. Material reported by trusted flaggers is reviewed against our terms and policies, just like all other reports, and then actioned accordingly. We do the same if any government or law enforcement agency reports a potential violation or requests that we remove content or URLs in line with local laws.

You find it

We encourage you to tell us if you have seen or experienced something on a Microsoft service that you are concerned about. Providing detailed information about the content, conduct, or other item, including where to find it (such as a link to the content or a copy of an email), will help our reviewers evaluate it and take action if it violates our terms and policies. Most products and services have options within the product experience for you to report concerning content or other users’ conduct. You can also report concerns to us directly. When reporting, be careful not to send us illegal material; instead, use links to the content you’re reporting so we can find it, and read the form carefully to make sure you provide all the information requested.

Content review

Human reviewers consider images, video, messages, and context.

Policies

Microsoft content and conduct policies explain what is not allowed on our services.