An AI script editor could help decide what films get made in Hollywood

Today Cinelytic launched a new tool called Callaia, which amateur writers and professional script readers alike can use to analyze scripts for $79 each. Using AI, Callaia takes less than a minute to write its own coverage, which includes a synopsis, a list of comparable films, grades for areas like dialogue and originality, and actor recommendations. It also makes a recommendation on whether or not the film should be financed, giving it a rating of “pass,” “consider,” “recommend,” or “strongly recommend.” Though the foundation of the tool is built on ChatGPT’s API, the team had to coach the model on script-specific tasks like evaluating genres and writing a movie’s logline, which summarizes the story in a sentence.
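The company hasn't published implementation details, but the general pattern it describes (prompting a general-purpose chat model to return structured coverage fields) can be sketched in a few lines. The Python example below is a minimal illustration, assuming the OpenAI Python SDK; the prompt, field names, and model choice are assumptions for the sake of the sketch, not Cinelytic's actual implementation.

```python
# Illustrative sketch only: a generic "script coverage" request to a chat model.
# The prompt, output fields, and model are assumptions, not Cinelytic's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COVERAGE_PROMPT = """You are a professional script reader writing coverage.
Given the screenplay below, return JSON with these fields:
- synopsis: a one-paragraph summary
- logline: the story in a single sentence
- genre: the primary genre
- comparable_films: a short list of similar released films
- grades: 1-10 scores for dialogue and originality
- recommendation: one of "pass", "consider", "recommend", "strongly recommend"
Point out weaknesses as well as strengths; do not default to praise."""

def analyze_script(screenplay_text: str) -> str:
    """Send a screenplay to the model and return its coverage as a JSON string."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model could stand in
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": COVERAGE_PROMPT},
            {"role": "user", "content": screenplay_text},
        ],
    )
    return response.choices[0].message.content
```

The last line of that hypothetical prompt gestures at the tuning problem Cinelytic's CTO describes later in this piece: left to their defaults, chat models tend to praise whatever they are given.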

“It helps people understand the script very quickly,” says Tobias Queisser, Cinelytic’s cofounder and CEO, who also had a career as a film producer. “You can look at more stories and more scripts, and not eliminate them based on factors that are detrimental to the business of finding great content.”

The idea is that Callaia will give studios a more analytical way to predict how a script may perform on screen before they spend on marketing or production. But, the company says, it’s also meant to ease the bottleneck that script readers create in the filmmaking process. With such a deluge of scripts to sort through, many can make it to decision-makers only if they have a recognizable name attached. An AI-driven tool would democratize the script selection process and allow better scripts and writers to be discovered, Queisser says.

The tool’s introduction may further fuel the ongoing Hollywood debate about whether AI will help or harm its creatives. Since the public launch of ChatGPT in late 2022, the technology has drawn concern everywhere from writers’ rooms to special effects departments, where people worry that it will cheapen, augment, or replace human talent.  

In this case, Callaia’s success will depend on whether it can provide critical feedback as well as a human script reader can. 

That’s a challenge because of what GPT and other AI models are built to do, according to Tuhin Chakrabarty, a researcher who studied how well AI can analyze creative works during his PhD in computer science at Columbia University. In one of his studies, Chakrabarty and his coauthors had various AI models and a group of human experts—including professors of creative writing and a screenwriter—analyze the quality of 48 stories, 12 of which had appeared in the New Yorker and the rest of which were AI-generated. His team found that the two groups virtually never agreed on the quality of the works.

“Whenever you ask an AI model about the creativity of your work, it is never going to say bad things,” Chakrabarty says. “It is always going to say good things, because it’s trained to be a helpful, polite assistant.”

Cinelytic CTO Dev Sen says this trait did present a hurdle in the design of Callaia, and that the initial output of the model was overly positive. That improved with time and tweaking. “We don’t necessarily want to be overly critical, but aim for a more balanced analysis that points out both strengths and weaknesses in the script,” he says.