AI’s hype and antitrust problems are coming under scrutiny

Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts for AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. “The way that the big get bigger in AI is by sucking up everyone else’s data and using it to train and expand their own systems,” Warren told the Washington Post.

The new bill would “require a competitive award process” for those contracts, barring the Pentagon from handing out “no-bid” awards for cloud services or AI foundation models. (The lawmakers’ move came a day after OpenAI announced a partnership with Anduril to deploy its technology on the battlefield for the first time, completing a year-long reversal of the company’s policy against working with the military.)

While Big Tech faces a wave of antitrust investigations, including the ongoing lawsuit against Google over its dominance in search and a newly opened investigation into Microsoft, regulators are also accusing AI companies of, well, just straight-up lying.

On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, alleging that it makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and as being trained on millions of images. The FTC says both claims are false: the company couldn’t support the bias claim, and the system was trained on only 100,000 images.

A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better protection than simple metal detectors, saying they use AI to accurately screen for guns, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv inflated those accuracy claims and that its systems failed in consequential cases, including a 2022 incident in which a scanner missed a seven-inch knife that was later used to stab a student.

Those cases add to the complaints the FTC filed back in September against a number of AI companies, including one that sold a tool for generating fake product reviews and another selling “AI lawyer” services.