The Download: Rethinking AI benchmarks, and the ethics of AI agents

Every time a new AI model is released, it's typically touted as acing a series of performance benchmarks. OpenAI's GPT-4o, for example, was launched in May with a compilation of results showing its performance topping every other AI company's latest model in several tests.

The problem is that these benchmarks are poorly designed, their results are hard to replicate, and the metrics they use are frequently arbitrary, according to new research. That matters because AI models' scores on these benchmarks determine the level of scrutiny they receive.

AI companies frequently cite benchmarks as a testament to a new model's success, and those benchmarks already form part of some governments' plans for regulating AI. But right now, they might not be good enough to be used that way—and researchers have some ideas for how they could be improved.

—Scott J Mulligan

We need to start wrestling with the ethics of AI agents

Generative AI models have become remarkably good at conversing with us and at creating images, videos, and music for us, but they're not all that good at doing things for us.

AI agents promise to change that. Last week researchers published a new paper explaining how they trained simulation agents to replicate 1,000 people’s personalities with stunning accuracy.

In the near future, AI models that mimic you could go out and act on your behalf. If such tools become cheap and easy to build, they will raise lots of new ethical concerns, but two in particular stand out. Read the full story.

—James O’Donnell