Humanity is rushing to absorb technological advancements like generative AI, but it has only just started to consider the ramifications and risks.
Ten years have passed since Theodore, a lonely thirtysomething writer, fell for Samantha, his AI assistant, in the Spike Jonze film “Her.” Set in a not-too-distant, hyperconnected future where crushing loneliness drives people to seek affection through their devices, the quirky love story and its stars, Joaquin Phoenix and Scarlett Johansson, delighted critics. (It’s not really so far-fetched. This is already taking place.)
The technology was where the movie’s plot strained credulity. Samantha could display compassion, empathy and inventiveness, or at the very least emulate them. The idea that artificial intelligence would be capable of that anytime soon seemed like a reach.
How much change a decade can bring.
Consider that Google Assistant and Amazon’s Alexa weren’t even around when “Her” premiered in 2013; Apple’s Siri was only two years old at the time. Soon enough, though, the public’s main interactions with AI would come through subpar voice assistants and customer-support chatbots that weren’t very capable, confused easily, and generated more accidental comedy or user frustration than positive feedback.
Given that experience, consumers would sooner avoid bots than interact with them. Then, in November 2022, OpenAI quietly unveiled ChatGPT and offered the general public a free preview. As soon as word spread, the technology went viral.
Generative AI, a version of the technology capable of impressive language proficiency, broad comprehension and the creative chops to produce art, music and written works, represents a generational leap over prior bots. Chatbots like ChatGPT are built on large language models, which appear to dramatically close the gap between machine and human effort. Large language models are trained on vast volumes of data, and the more data that gets pumped through them, the more the rough edges get blasted away.
Although it may seem as though this kind of advancement took a long time to arrive, it could not have happened at any other moment.
Modelling improvements, better hardware, more potent processing power and the availability of enormous, high-quality data sets all worked together to raise the bar and quicken the pace of progress. The results aren’t flawless; as newly developed technology, this new class of bots can draw strange or incorrect conclusions, become confused or make mistakes. But compared to earlier generations, the contrast is as stark as a used 2001 Kia next to a 2040 Ferrari.
It’s tempting to dismiss AI as just another tech fad, but experts have made it abundantly clear that it is here to stay and that businesses that don’t embrace it now risk falling behind. According to analysts, scientists, business executives, elected officials and many others, the technology is expected to touch nearly every element of modern life, including health care, pharmaceuticals, manufacturing, agriculture, commerce, interpersonal relationships, workplace productivity and more. They call it transformational, poised to trigger a tectonic upheaval comparable to the Industrial Revolution.
But this is precisely why critics are sounding the alarm. Because the technology advanced so swiftly, there are very few guardrails in place, if any. For something positioned to reach into so much of the world, that is terrifying. If there is bias in the training data, or in the people who provide human reinforcement, a crucial step in the development process, correcting it afterwards may be nearly impossible. The same goes for ensuring that private data and ownership rights are respected.
The threats are already plain to see. Deepfakes are one example: these AI-produced images, videos and audio clips can digitally duplicate a real person’s voice or face with ever-greater accuracy. It’s one thing to be astounded by the Pope sporting Balenciaga, Robert Downey Jr.’s younger self in a commercial, or AI versions of Drake and The Weeknd’s voices in a popular song. It’s quite another to realise how simple it is to digitally clone a politician in order to spread misinformation or incite violence. An image-generator programme, such as Midjourney or OpenAI’s DALL-E, can produce realistic-looking images of practically anything, such as the fake arrest photos of Donald Trump and Vladimir Putin that gained widespread attention in March. And these technologies are publicly available to anyone, criminals included.
Arizona mother Jennifer DeStefano learned that lesson the hard way when con artists impersonated her teenager to demand ransom. “Mom,” she recalled her daughter’s wailing, crying voice saying, as she told ABC News. “And I’m thinking, ‘OK, what just happened?’” Then: “Mom, these bad guys have me. Help me, please.”
Situations like that are unsettling, and that’s when the technology is functioning as intended. Defects have consequences too, of course. Inaccurate judgements drawn by AI due to bias or stale data can seriously harm certain people, groups and entire regions. (A previous version of ChatGPT lacked information on events occurring after 2021.)
The Biden administration met in May with the CEOs of Google, Microsoft, ChatGPT maker OpenAI and Anthropic to discuss ethics and accountability. The U.K. is reportedly preparing for an AI summit this autumn. But until rules arrive, fast-moving AI development remains something of a wild west.
Brands should vet potential partners carefully: find out how their models were trained, where the data comes from and how good it is, how the platform works to eliminate bias, and whether the technology is being built with ethics and intention. A little diligence today could protect companies from eventual court rulings, laws and regulations that might force changes later.
Business decisions on AI-related ventures must be well thought out as well. One such example is the pushback this spring against Levi Strauss & Co. The well-known denim company disclosed a partnership with Lalaland.ai in March to test AI-generated fashion models and increase the diversity of its marketing efforts.
Dr. Amy Gershkoff Bolles, who leads new innovation at Levi’s, said in a statement that “while AI will probably never fully replace human models for us, we are excited for the potential capabilities this may afford us for the consumer experience.” On paper, the move made sense, considering the brand’s emphasis on cutting-edge technologies, its propensity for pushing boundaries and its cause-based ethos. Not everyone saw it that way, though.
Models, artists, diversity activists and others pounced on the company, accusing it of choosing “fake diversity” over hiring actual diverse human models. Tulsa Rice, a data analyst who tweets under the handle @FlyIngenuity, didn’t mince words, calling it “digital blackface.”
New technology can carry unanticipated drawbacks as well. Take the metaverse as an illustration. When Hermès sued Web3 designer Mason Rothschild over his MetaBirkins NFTs earlier this year, the boundaries of intellectual property law were put to the test in the fashion industry. The jury rejected the defense’s case, which positioned the digital goods as artworks and therefore a protected class of expression. With the win in February, the luxury company effectively extended IP rights from the physical world into the virtual realm.
Things appear to be even trickier with generative AI and its talent for creative labour, which is already stirring up concerns and debate across industries.
The deepfake song “Heart on My Sleeve,” featuring AI versions of Drake and The Weeknd, went viral in April and served as the music industry’s wake-up call: neither artist actually performed on the track. The vocalists’ label, Universal Music Group, scrambled to remove the song from every platform it could find, because even though the voices were fake, the panic was real.
Hollywood is not immune to the influence of generative AI; the May writers’ strike is only one example.
The Writers Guild of America sees the technology as a useful tool for its members, but it wants studios to agree to certain rules and to guarantee that using AI won’t infringe on writers’ rights or their ownership of their creations. The studios’ representative organisation, the Alliance of Motion Picture and Television Producers, does not want to stifle a potentially effective method of cost-cutting.
The guild explained to The Hollywood Reporter, an INFOSTRAVE-affiliated website, that “writers want to be able to use this technology as part of their creative process, without changing how credits are determined, which is complicated given that AI material can’t be copyrighted.”
But the truth is less clear-cut. As the Harvard Business Review notes, the question of who legally owns works produced by artificial intelligence has yet to be resolved, and it is not an easy one.
The journal cited the case of Andersen v. Stability AI et al., filed in late 2022, in which artists sued “multiple generative AI platforms on the basis that the AI used their original works without licence to train their AI in their styles, allowing users to generate works that may be insufficiently transformative from their existing, protected works, and, as a result, would be unauthorised derivative works.”
In other words, the case argues that using another person’s data or images to train an AI model essentially teaches it to mimic that person’s style, which could result in knockoffs. Even when copying isn’t the objective, it can still occur: AI platforms that scrape training data from the web can easily take in unauthorised content that shapes the end result.
Who is responsible in that situation? The tech platform that developed and trained the AI model? The brand that made the infringement possible? Or perhaps the customer who used a generative AI tool to customise a problematic item of clothing?
The deeper you look, the more questions arise. Stick with the customisation tool as an illustration: Who owns the final design, the brand or the customer who gave the AI a prompt? Can someone hold the rights to the prompts fed to the bot, or even trademark them? If a bot-produced design resembles another designer’s distinctive style, is the user at fault, or the bot?
These are complex challenges that may signal a new reality in the age of AI. Once it is conceivable, and even simple, to create almost anything, everything will get built. Including imitations.
Machine learning is proving to be a powerful tool in the fight against fake goods, thanks to its outstanding capacity to recognise and detect patterns. That’s why e-commerce behemoths like Amazon use ML in their campaigns against fakes. But the technology can cut both ways, catching copycats in some cases while inspiring them in others.
Everything points to a single, obvious truth: technologies are fundamentally just tools, with no inherent soul or agenda. That may become harder to remember as generative AI develops, given how amazingly human-like its speech and creativity already appear. But the real worth, or harm, of bots like ChatGPT depends on the people who create or use them. At least for now.
Machine sentience might be on the menu in real life one day, much as Scarlett Johansson’s Samantha comes into her own self-awareness in “Her.” DeepMind Technologies CEO Demis Hassabis doesn’t completely rule it out, at least.
“The definition of consciousness hasn’t truly been agreed upon by philosophers, although if we mean self-awareness…I believe there’s a chance that AI might be in the future,” he said in an interview with “60 Minutes.” Google purchased DeepMind in 2014 as part of its years-long research and investment in AI. The internet giant even created a bot that was surprisingly capable of expressing, or more accurately mimicking, emotions; it worked so well that it duped one of Google’s own engineers. Now, with ChatGPT’s explosion accelerating the market, Google is locked in an AI arms race with Microsoft, a significant backer of OpenAI.
One of these giants might even be the first to cross the Singularity, the turning point when artificial intelligence overtakes that of humans. In the movie, Johansson’s Samantha eventually got there. Given the current breakneck pace of progress, some data scientists and experts say it is possible in real life as well, predicting it will occur within the next seven years.
There is a lot of work to be done and not enough time.