AI (in)credibility

The march of AI is making many think of technology in terms of a future overlord. Should we be afraid of AI, asks Mathures Paul

Mathures Paul Published 13.08.23, 12:10 PM
AI tools like Sudowrite, Writer, Reword and Mindsera are changing the way authors go about writing books

In May, the ‘Godfather of AI’ Geoffrey Hinton left Google. He worried that artificial intelligence would cause serious harm. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he said about the “monster” the world is talking about.

AI has been everywhere for years now — Siri, Alexa, Google Assistant and plenty more. Then came ChatGPT, and the neighbourhood roadside stall changed its name to “ChaatGPT”. Is AI something to be scared of? And if it is, just how big are the fears? We look at four areas into which AI has made inroads.

The fear: Death of an author

P.G. Wodehouse delivered scores of books, short stories, plays and much more over a career of some 75 years. If you think that’s quick, ask a fledgling author about speed. Someone trying to eke out a living writing fiction for Amazon’s Kindle platform may have only four or five months to deliver each new book. For such authors, writer’s block is a luxury, taking a holiday is a luxury and even getting a bespoke suit could be a luxury.

Since writers need to move fast, the question being asked is whether writing has a future. The Czech-Brazilian philosopher Vilém Flusser predicted, in his highly perceptive 1987 book Does Writing Have a Future?, that “only historians and other specialists will be obliged to learn reading and writing in the future”. In other words, anxieties about the future of the written word predate AI.

There are many ways to look at how AI is transforming writing. If writing involves fleshing out a sentence with three adjectives, leave it to AI, because Generative Pre-trained Transformer (GPT) models can easily calculate the probability of the next word in a sentence. Frankly, putting in too many adjectives is poor writing. And as GPT models feed on more text, they are becoming, well, perfect.
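To make that concrete, here is a minimal sketch of next-word prediction using the openly available GPT-2 model through Hugging Face’s transformers library. It illustrates the idea of a model assigning probabilities to possible next words; it is not the exact model behind any of the tools named in this piece.

```python
# Minimal sketch: ask a small GPT model for the most probable next words.
# Uses the openly available GPT-2 checkpoint via Hugging Face transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The detective opened the door and saw a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # a score for every word in the vocabulary
probs = torch.softmax(logits[0, -1], dim=-1)   # probabilities for the very next token

# Print the five most likely continuations and their probabilities
top = torch.topk(probs, 5)
for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```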

Actors and writers in Hollywood are striking for a number of reasons, one of the biggest being the march of AI Picture: Reuters

Writers — like soldiers — won’t die, but they will certainly have to sandpaper their skills or prepare to succumb to AI tools like Sudowrite, the brainchild of sci-fi authors Amit Gupta and James Yu. Built on OpenAI’s language model GPT-3, it is meant for fiction writers: paste in what you have already written, highlight a few words, and the AI takes over, proposing twists and turns or generating more description. “Write is like autocomplete on steroids. It analyses your characters, tone, and plot arc and generates the next 300 words in your voice. It even gives you options,” says Sudowrite in a note. Investors in Sudowrite include the founders of Twitter, Medium, Gumroad, Rotten Tomatoes and WordPress, and the writers/directors of Big Fish, Aladdin, The Bourne Ultimatum, Ocean’s Twelve and many more.

Another popular go-to tool is Writer, aimed at businesses that have to produce large amounts of text. It helps human writers deliver iterative copy at scale while cutting costs: product descriptions, investor memos, blog-post headlines and whatnot.

Then there is Reword, which is similar to Writer and offers a topic-idea generator, a subheading generator and a headline generator, besides producing text in your voice. Mindsera, yet another app, is more an editor than a writer: it turns the tables by generating questions based on what you have written, giving your writing a clearer direction.

Will these tools make Amitav Ghosh jiggle like creme caramel? Not in the least. If I bonk my head with a hammer, it is my foolishness that gives rise to the headache, not the tool. The thesaurus has been around for ages, but have our vocabularies improved?

There are a lot of travel guidebooks floating around on different marketplaces, most of them with rave reviews. Many of them are AI-generated… more like Wikipedia-generated. The authors of many of these books don’t exist. We need to stop trusting everything we read.

The popular tech site CNET was caught publishing AI-written stories. After the practice was spotted, it merely pressed pause. Has that stopped CNET from chasing Google search results? No. It is reportedly pruning its vast archives so that fresher content performs better in Google’s search rankings.

I am reminded of a moment from the movie Music and Lyrics in which Drew Barrymore’s Sophie Fisher reminds Alex Fletcher (Hugh Grant): “She was a brilliant mimic. She could ape Dorothy Parker or Emily Dickinson but stripped of someone else’s literary clothes, she was a vacant, empty imitation of a writer.” Sophie might well have been speaking about AI.

The fear: Robot on the ramp

C’mon, don’t expect a robot to walk the ramp. AI is having its moment among fashion designers but not in the way one would expect. In April, Spring Studios in New York hosted AI Fashion Week. Instead of the usual fare, 24 screens took command and displayed a never-ending stream of images created with various image software. Models walked the ramp on screen, through rainforests, deserts and whatnot while visitors milled around the studio with their drinks, taking pictures of the screens.

Take the example of Dmitrii Rykunov’s unisex collection of jackets and kimonos, trousers and trench coats. After reading clothing-construction books, he logged in to Midjourney and prompted it with detailed descriptions of the garments, including patterns. The result is a collection titled Blooming Garden, which he would like to launch as a made-to-order operation. Basically, he leaned on generative AI tools to come up with different patterns.

Also taking refuge in Midjourney is Rachel Koukal, a graphic designer and art director, whose collection, titled Soft Apocalypse, features a diverse group of mostly curvy models wearing futuristic body-con silhouettes with jackets and pants giving out techy vibes.

Tools like Midjourney and DALL-E can create new images in seconds, and you can keep going back to an image to make changes. These AI tools work like chatbots: you type in a text prompt, and the more specific you are, the better the result.
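For a sense of how such prompt-driven image generation looks in code, here is a minimal sketch using the open Stable Diffusion model through the diffusers library. The checkpoint name and prompt are illustrative, and this is a rough stand-in for the workflow, not how Midjourney or DALL-E are implemented internally.

```python
# Minimal sketch: generate an image from a text prompt with an open
# Stable Diffusion checkpoint (illustrative; needs a GPU and the
# diffusers, transformers and torch packages).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a widely used open checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("a model on a runway wearing a futuristic trench coat, "
          "rainforest backdrop, soft light, detailed fabric texture")

# More steps add detail; guidance_scale controls how closely the
# image follows the prompt.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("runway_look.png")
```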

Concerned? You shouldn’t be. Turn the clock back to 2017, when couturier Gaurav Gupta worked with IBM’s AI platform, Watson. He created a white-fabric gown with integrated lighting that used AI to change colours depending on the mood of the person interacting with its wearer; according to the designer, it was IBM Watson’s tone- and emotion-gauging software at play. Working with Watson also ensured a degree of transparency in the design process.

The ingress of AI into fashion is quite different from that into other industries. One of the biggest issues dogging the industry is waste, because customers are difficult to satisfy. Before all the chat around AI surfaced, I spoke to Nivruti Rai in 2019, when she was country head of Intel India (she is now managing director and CEO of Invest India). She believed the fashion industry would be able to leverage the technology. “In the US retail industry, merchandise returns per year run into several hundred billion dollars. It is because when designers make clothes, they cut for an hourglass figure. To make small, medium and large versions all they do is just work around hourglass-figure designs but the shape remains the same. Not everybody suits the hourglass; one has to look at different shapes. Different fabrics have different stretching capabilities. If a person is on the heavier side, some fabrics will stretch and just fall but some make it look worse by clinging to the body. AI and computer vision technology can address the problem, saving a lot of the losses,” she told me.

What she described is now coming true. Using the latest AI technology, waste is being cut down, and designers can work on designs more easily, or at least innovate more easily on screen. Professor Calvin Wong Wai-keung has developed AiDA, short for AI-based Interactive Design Assistant for Fashion. The chief executive officer of AiDLab, a research centre established jointly by Hong Kong Polytechnic University and London’s Royal College of Art in 2020, has led a team to build software that lets designers upload “mood boards” for an upcoming collection: inspirational sketches, themes, fabrics, a preferred range of colours and so on.

Another part of the game is influencers. CGI models like Miquela Sousa, Noonoouri, Shudu and Imma have changed the way we look at social media influencers. Noonoouri, for example, “lives” in Paris and has collaborated with leading fashion brands like Dior and Stuart Weitzman. There is sponsored content on her account and she “wears” luxury labels like Burberry and Valentino. Virtual supermodel Shudu was born in the studio of Cameron-James Wilson, who created the character in spring 2017 using a programme called Daz 3D; she has found admirers in Alicia Keys and Naomi Campbell.

What about Marc Jacobs? His fall 2023 runway show took place at the New York Public Library and showcased 29 looks in three minutes. ChatGPT was at play, but not in the coolest of ways: it was used to draft the show notes. It was a moment when generative AI was rightly condemned for being underwhelming.

The fear: The day music dies

The music industry most certainly has an AI problem. A few months ago, an AI-generated song titled Heart on My Sleeve was uploaded to major streaming services by an anonymous TikTok user. It became an instant hit because it featured AI-generated versions of Drake’s and The Weeknd’s voices.

To the casual listener, it sounds like the real thing, but it uses generative artificial intelligence to create familiar sounds. The same goes for the clip of Ariana Grande singing the Rihanna song Diamonds: Grande’s voice was generated by AI. What AI has in its favour is speed and scale, outcompeting human endeavour even when the quality is poor.

Users have easy access to AI voice generators that can clone celebrity voices, which is how Harry Styles got “featured” on the Taylor Swift song Cardigan, and how an AI Kanye West ended up doing Hey There Delilah. But Heart on My Sleeve has served as a wake-up call. Universal Music Group flagged the content on streaming platforms, citing intellectual property concerns. In a statement, UMG asked which side of history all stakeholders in the music ecosystem want to be on: “the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation”.

An AI-generated song titled Heart on My Sleeve was uploaded to major streaming services and it features Drake and the Weeknd’s AI-generated voices. Picture: Getty Images

Looking at some of the tools connected with AI music: in 2019, ByteDance (which owns TikTok) acquired the AI music platform Jukedeck, which lets users tailor music to match videos. The following year, Shutterstock bought “certain assets” of Amper, an AI music platform that auto-generated music based on parameters like mood, length, tempo and instrumentation. There are services like AIVA and Beatoven that give game developers, podcasters and content creators royalty-free musical backdrops. And let’s not forget Endel, a generative music app that creates personalised sound environments to match user activities.

Courts and regulators have many questions before them about ownership when it comes to AI music. At the moment, intellectual property protection applies to works created by humans; the grey area is musicians collaborating with machines. Many years ago, musicians like Tom Waits and Bette Midler argued in court that they had a right not just to their musical compositions or recordings, but to their voices.

They were in court to fight sound-alike imitators in advertisements. But that was in the US. What about using generative AI to replicate the voices of dead artistes, like Freddie Mercury?

Speaking of voices, a recent eight-track album called AIsis: The Lost Tapes was released by the indie band Breezer. The group created the original music during lockdown in 2021 and then decided to bring in the voice of Liam Gallagher, which turned out to be AI-generated. Breezer, in a way, began when Chris Woodgates and co-writer Bobby Geraghty started writing songs in 2013; the Covid lockdowns pushed them to form the band. To get Liam’s voice, Geraghty trained his AI model on various a cappella recordings of the singer. “Our band sounded exactly like Oasis. So then all I had to do was replace my vocals with Liam’s,” he told The Guardian. The 33-minute concept album basically reimagines the Oasis sound; except for the AI voice, the songs are original, and the music and lyrics are Breezer’s own.

So are musicians obsolete? Not at all. AI can easily churn out repetitive patterns, which will affect many EDM producers, but human singers can convey something AI can’t: emotions, unique styles and the ability to interpret lyrics. AI-generated music can produce intrigue but not human connection, and music is about creating human connections. Pop stars have charisma and talent that connect them with audiences on a personal level. AI can’t replicate that.

AI and music go back a long way. Musicians have experimented with machine learning since the 1990s; a more recent tool is the Realtime Audio Variational autoEncoder, or RAVE. Remember Google Brain’s Magenta, a project that aims to have computers produce “compelling and artistic” music? New rules for the 66th Grammys state that music needs a significant degree of human authorship to be considered for the respected awards in their respective categories. “A work that contains no human authorship is not eligible in any Categories,” the rules state. Long live Bruce Springsteen and Taylor Swift. We are sure they will continue to entertain.

The fear: Box-office takeover

In the Black Mirror episode Joan Is Awful, a streaming platform dramatises an ordinary woman’s life using Salma Hayek’s AI-generated likeness, playing out all her bad traits and regrettable decisions on screen. In a way, it says AI will replace actors. The fear of AI taking over Hollywood is real: the screenwriters of the Writers Guild of America (WGA) have been on strike for three months, joined by 160,000 members of SAG-AFTRA, the actors’ union. In simple terms, Hollywood is closed for business. All of them are afraid that “artificial intelligence poses an existential threat to creative professions”.

AI is being used to write scripts and AI technology is being perfected to make videos. The next frontier for artificial intelligence is text-to-video. Runway, the generative AI startup that co-created the text-to-image model Stable Diffusion, has a model called Gen-1 that can convert existing videos into new ones by applying any style specified by a text prompt or reference image. Runway’s website has a few examples of this new kind of video. Even though the examples are very short, the output is realistic, like that of an “afternoon sun peeking through the window of a New York City loft” or “a low angle shot of a man walking down a street, illuminated by the neon signs of the bars around him”. The team behind The Late Show with Stephen Colbert has used Runway software to edit the show’s graphics, while the visual-effects team behind Everything Everywhere All at Once used the company’s tech to help create certain scenes. Runway has also been used for Finneas’ Naked music video (VFX artist Evan Halleck has said: “There’s some hand-stretch stuff. Mainly to cut out so I could create a background.”) Meta Platforms and Google, too, are in the ring. Last September, Meta unveiled a system called Make-A-Video.

One look and you will know the videos are machine-generated, but the system represented a step forward in AI content generation. The clips were kept to a maximum of five seconds and didn’t contain audio. Meta CEO Mark Zuckerberg said in a post: “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time.”

Google has cutting-edge ideas and systems in place: one model emphasises image quality while another prioritises the creation of longer clips. The high-quality model is called Imagen Video. Imagen is what’s called a “diffusion” model, generating new data by learning how to “destroy” and “recover” many existing samples of data. Imagen Video has been kept as a research project to avoid harmful outcomes. Last year, another team of Google researchers published details of a text-to-video model named Phenaki, which can create longer videos that follow the instructions of a detailed prompt.

Recently, Secret Invasion, a Disney+ show from Marvel, was criticised for its opening credits, which were generated using artificial intelligence. The Marvel Cinematic Universe (MCU) owes its success to comic book writers and artists, and that legacy is now being questioned. The series’ director Ali Selim has confirmed that AI technology from a company called Method Studios was used to come up with the opening sequence of the new series, featuring Samuel L. Jackson as Nick Fury. Method Studios is quite popular and has worked on Marvel shows like Ms. Marvel, Loki and Moon Knight. In a statement to The Hollywood Reporter, the company said that “no artists’ jobs were replaced by incorporating these new tools”.

The Hollywood sign recently turned 100. It has managed to look beyond the Great Depression, two world wars, various strikes, technological disruptions and the pandemic, and is now facing another war, this time waged by AI.

Salma Hayek discovers she signed away the rights to her AI likeness in an episode of Black Mirror Picture: Netflix

The technology has already been deployed in films such as Indiana Jones and the Dial of Destiny, to “de-age” its star Harrison Ford, and in the Netflix film The Irishman. Speaking at Cannes, actor Sean Penn extended his support to the writers by calling the use of AI to write scripts a “human obscenity”.

The fear: Lock, stock, paint and barrel

Before the AI fear set in, we spent the last few years playing with different apps that create avatars of our boring selves. These were simply AI-driven apps that wanted us to share more information about ourselves. And then we discovered DALL-E and Midjourney. Suddenly we were creating snowfall at Victoria Memorial. These photographs made for good optics on social media. Finally, reality set in: fear. Artists fear that AI-generated art will depreciate the value of art, and they are protesting the fact that AI models generate art based on existing artwork. In other words, IP theft.

DALL-E, from OpenAI (its name a nod to the 2008 animated movie WALL-E, about an autonomous robot, and the surrealist painter Salvador Dalí), helped trigger the birth of several text-to-image generators, like Stable Diffusion and Midjourney. For at least five or six years, AI labs have been working on systems that could identify objects in digital images and even generate images on their own: flowers, dogs, cars, faces. Next, systems were built that could do the same with written language, summarising articles, answering questions and even writing blog posts. These technologies have been combined to create the new form of AI.

At its core is a neural network that learns skills by analysing large amounts of data. By training it to pinpoint patterns in a million photos of avocados, it learns to recognise an avocado. If the AI system is then asked to create an image of an avocado next to a trumpet, it generates images that include the key features it has learned during training. Another neural network, called a diffusion model, creates the image, generating the pixels needed to realise these features. Everything happens in a matter of seconds (a toy sketch of this denoising step appears at the end of this piece).

Since a lot has already been written on the subject, I will keep it short and point to what is called Alejandro Jodorowsky’s ‘Tron’. The visionary Chilean film-maker never tried to make the film Tron. But Canadian director Johnny Darrell used the AI programme Midjourney to imagine what a Jodorowsky Tron might have looked like, drawing on the aesthetic of the director’s famously unmade 1970s adaptation of Dune.

Shudu Gram is a CGI model and social media influencer. Picture: Shudu Gram

With some AI sorcery, Darrell has given the world photographs that contain elements found in movies from Star Wars to Alien. As a nod to the great director, he calls the project Alejandro Jodorowsky’s ‘Tron’.
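And here is the toy sketch promised above: a minimal version of the reverse “denoising” loop at the heart of diffusion models such as DALL-E, Imagen and Stable Diffusion. The noise schedule follows the standard DDPM recipe, but the noise-predicting network is only a placeholder; in a real system it is a large trained model conditioned on the text prompt.

```python
# Toy sketch of a diffusion model's reverse (denoising) loop.
# predict_noise() is a placeholder for the trained, text-conditioned
# network that real systems use.
import numpy as np

T = 1000                                  # number of denoising steps
betas = np.linspace(1e-4, 0.02, T)        # standard DDPM noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    """Placeholder for the trained noise predictor."""
    return np.zeros_like(x)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64, 3))      # start from pure noise (a 64x64 "image")

for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # Remove the model's estimate of the noise added at step t...
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    # ...and re-inject a little fresh noise, except at the final step.
    if t > 0:
        x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)

print("Denoised sample shape:", x.shape)  # with a real model, x would now be an image
```

With a real, trained noise predictor in place of the placeholder, repeating this step hundreds of times turns random static into a coherent picture that matches the prompt.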
