Black Mirror: Joan Is Awful and Hollywood's Proposal to Own the Likeness of Its Actors
This summer Netflix released the sixth season of Charlie Brooker's ‘Black Mirror’, an award-winning series beloved for its cynical take on our relationship with technology. For much of the new season Brooker strays away from plots fuelled by future technologies, favouring the supernatural to develop new themes, yet he has nonetheless found pertinence in ongoing discussions about new and developing technology. Joan Is Awful, the first episode of the new season, reached our screens with uncanny timing: not so much a prophecy as a protest. It follows Joan as she discovers that a streaming service is producing a series based on her life. In the tradition of Black Mirror, Brooker prods at multiple points of discussion simultaneously, including but not limited to personal data, negativity bias, personalised content and artificial intelligence. We learn that the series copying Joan’s life is entirely CGI, and that the person who plays her character in the ‘Streamberry’ series is not an actor at all but the digital likeness of an existing actor, licensed to the film company to do with as they like.
In July, the AMPTP floated a proposal that SAG-AFTRA Chief Negotiator Duncan Crabtree-Ireland vehemently opposed. The ‘groundbreaking’ AI proposal suggested that ‘background performers should be able to get scanned’ and that ‘companies would own that scan, their image and their likeness.’ Without consent or compensation, studios would be able to use that likeness on any future project to “act” anything they wanted. Amid fears of AI threatening jobs, the idea that a background actor would be paid a low day rate to forfeit any future work with a production house is jarring, as is the minimal protection offered. Background acting has long opened a door onto film sets, providing space to learn and gain acting experience, and feeding an entire ecosystem of employment that includes, but is not limited to, hair and makeup artists, runners, production assistants and casting agents. All of these jobs are threatened by the AMPTP’s proposal. For years, background actors have been scanned and duplicated to manufacture crowds or obtain otherwise impossible visuals, feeding the CGI for armies, life-threatening stunts and bodies thrown from explosions. Until now, these scans have been limited to the specific project the actor worked on; the new proposal would change that.
Online entertainment publication Collider has given a platform to sources alleging that studios have already been using the technology ‘for a number of years’, including on major productions from Warner Bros. Discovery, Marvel Studios and Netflix. The secrecy surrounding film sets and the involvement of NDAs make such allegations difficult to confirm, but the smoke itself is troubling. Rolling Stone magazine has also provided a voice to background actors who expressed ‘confusion as to what exactly they participated in when they were scanned’ and concern about how those images might later be used to train AI programs. The technology cannot yet create convincing full-body models; however, with no paperwork or information about what they were participating in, many actors remain concerned that their images are still in the possession of the companies that took them.
One source, who has chosen to remain anonymous, spoke to Pi Media directly about their experiences, recalling a Disney+ production on which their likeness was scanned. They stated that the process was not explained to them and that they ‘learned what they were doing through the other extras.’ They were compensated £50 for participating in the scan but were given no paperwork to sign explaining what they were agreeing to. “They didn’t tell us about being scanned until the actual shoot day. We were waiting on a bus which they used as a holding area for extras and then some guys came in and asked for volunteers who would like to be scanned.” Eventually, in an effort “to try and round up everyone who hadn’t been scanned,” the crew stopped asking for volunteers and instead began to ask “who hadn’t been scanned yet.” Because the scan was framed as just another aspect of the job, many background actors do not feel they were given a choice.
It is now crucial to ensure that employers act ethically and transparently so that actors understand what they are consenting to. Where Joan Is Awful perhaps misses the mark is its ending: two characters, axe in hand, smash the AI machine. Perhaps limited by the platform on which it is shown, the resolution to Joan’s problems is an external one. In reality, it is likely impossible to stop the use of innovative technology; but we can protect those whose work is being used. In the midst of the SAG-AFTRA strikes, we are reminded that we cannot leave new technologies unchecked. We must demand a collective response to draw up agreements and regulations that protect workers, and fair treatment from the human employers who use the technology. There is no AI supercomputer to smash, only a corporate and systemic failure to protect actors that needs to be strictly regulated.