Microsoft Launches AI to Hunt for “Harmful Content” Online

Microsoft has launched Azure AI Content Safety, a new AI service that allegedly creates secure online spaces by seeking out “inappropriate texts and images” across the Internet.

Do you want Microsoft to determine what you are allowed to say? They will. If it sounds like Maoist censorship, that’s because it is: a burgeoning CCP-style social credit system.

Just a few months ago, Microsoft laid off the ethics and society team within its larger AI organization, leaving the company without a dedicated team to ensure its AI principles are closely tied to product design.

The service flags allegedly hateful, violent, sexual, and self-harm content in images and text, assigning severity scores that businesses can use to limit content and prioritize items for human moderators to review.

Microsoft claims it handles nuance and context.

We can imagine. Oh, by the way, you don’t have free speech.

The announcement said the service includes the following (a rough sketch of calling it appears after the list):

  • Unsafe Content Classifications: Azure AI Content Safety classifies harmful content into four categories: sexual, violence, self-harm, and hate.
  • Severity Scores: It returns a severity level for each unsafe content category on a scale from 1 to 6.
  • Semantic Understanding: The AI-powered content moderation solution uses natural language processing techniques to address the meaning and context of language, closely mirroring human intelligence. It can analyze both short-form and long-form text.
  • Multilingual Models: It understands multiple languages.
  • Customizable Settings & Regulatory Compliance: It offers customizable settings to address regulations and policies.
  • Computer Vision: Powered by Microsoft’s Florence foundation model, which was trained on billions of text-image pairs, it performs advanced image recognition.
  • Real-Time Detection: The platform detects harmful content in real time.
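
For readers who want to see the mechanics, here is a minimal sketch of how a business might wire this up, assuming the REST shape Microsoft currently documents for the service (a POST to a text:analyze route that returns a severity per category). The endpoint, key, API version, and blocking threshold below are placeholders, not values from the announcement.

```python
# Minimal sketch of calling Azure AI Content Safety's text-analysis route.
# Assumes the documented REST shape; endpoint, key, and api-version are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder


def analyze_text(text: str) -> list[dict]:
    """POST text to the service and return its per-category severity results."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},  # assumed API version
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
    )
    resp.raise_for_status()
    # Response shape per Microsoft's docs (an assumption here), e.g.:
    # {"categoriesAnalysis": [{"category": "Hate", "severity": 2}, ...]}
    return resp.json()["categoriesAnalysis"]


# Per the "Severity Scores" bullet, a business might block above a chosen
# severity and queue the rest for human moderators. The threshold here is
# a hypothetical policy choice, not part of the announcement.
SEVERITY_THRESHOLD = 4

for result in analyze_text("example user post"):
    action = "block" if result["severity"] >= SEVERITY_THRESHOLD else "allow"
    print(result["category"], result["severity"], action)
```

Notice that the “decision” is ultimately a number compared against a threshold the deploying business picks; whatever nuance the model claims, the enforcement step itself is blunt.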
Earlier AI programs were a disaster: Microsoft’s Bing chatbot was threatening and contradicting users as recently as February. Expect this program to be disgustingly awful too.

Azure AI Content Safety is similar to other AI-powered toxicity detection services, including Perspective, maintained by Google’s Counter Abuse Technology Team and Jigsaw. It succeeds Microsoft’s own Content Moderator tool.

6 COMMENTS

    • That’s simple 22/7. Don’t underestimate AI’s ability to think outside the Box. A good AI program will rewrite its code and do away with the Box. Democrats are a real threat because Democrats don’t play by the rules; they do whatever they think they can get away with! Since Computers don’t have a Soul and Moral Code, they will eventually act like a Liberal; but a Really Smart Liberal.

  1. The “Hate” category will be all-encompassing. Disagree with the government? Hate. Against leftist perversion, vaccines, immigration… “Hate.” Someday forums like this will be illegal and will result in a knock on your door.

  2. So now Microsoft is getting into the Censorship Business with both feet. We should have sliced and diced Microsoft 25 years ago over Windows 98.
