Tech titan Microsoft has announced that it will reward users who can outsmart or pinpoint flaws in its artificial intelligence (AI) products. The move offers financial incentives to those who can unearth security gaps, with rewards ranging from $2,000 to $15,000, depending on the severity of the identified flaw.
The initiative aims to fortify the security of Bing’s AI products by challenging tech enthusiasts to detect potential vulnerabilities in how the AI operates.
In a recent blog update, the company introduced a new bug bounty program that commits to paying cybersecurity researchers between $2,000 and $15,000 for identifying vulnerabilities in Bing’s AI suite.
Primarily, these vulnerabilities involve persuading the AI to produce responses that bypass the safety guardrails intended to block bigoted or inflammatory content.
To participate in the program, Bing users must report previously undetected security vulnerabilities that meet the company’s established criteria: the vulnerabilities must be rated either “significant” or “critical” for safety, and researchers must be able to reproduce them through video demonstrations or written documentation.
Reward amounts depend on the severity and quality of the reported issues: the most severe vulnerabilities, backed by comprehensive documentation, fetch the highest payouts. This represents an intriguing opportunity for AI enthusiasts to monetize their expertise.
Whatever the timing of the initiative, it is noteworthy that Microsoft has chosen to tap external researchers for vulnerability hunting. Still, given the enormous scale of its commercial AI deals, the maximum reward of $15,000 may seem relatively modest in comparison.
The program underscores Microsoft’s dedication to bolstering the security and ethical safety of its AI products, and reflects its commitment to proactive responses in the AI realm, as per First Post.