Internet users have been putting questions to 'generative artificial intelligence' tools, such as Google Bard and OpenAI's ChatGPT, since their general release between November 2022 and March 2023. This is just the surface, however, of a much deeper and more capable technology that has emerged into public use over the past year. So, what's it all about? Why is everyone talking about it, and why should you care?
In this article we will take a brief look back in time to understand how AI got to where it is today, then consider some important current issues as AI becomes a more regular tool for many businesses and individuals.
What were you doing in 1997?
Artificial intelligence (AI) has had a turbulent history over the past 75 years, ever since Alan Turing first suggested in 1950 that machines had the potential to be intelligent, proposing what became known as the Turing Test. It took various success stories, and some failures, in both computing and robotics through the 1960s to 1980s to reach some significant breakthroughs in the 1990s. IBM's Deep Blue supercomputer was the first to hit the news headlines: the strongest chess computer yet built, it beat world chess champion Garry Kasparov in 1997, taking the deciding game in under 20 moves (a rarely mentioned aside is that Kasparov asked for a rematch, but IBM declined). The news spread globally, and the excitement, and anxiety, around a future of intelligent machines started to build.
Things began to pick up. The Massachusetts Institute of Technology unveiled Kismet, an 'emotionally intelligent' robot that recognised and reacted to human speech and facial expressions, and we got our first Roomba, a robot vacuum cleaner that navigated and cleaned the rooms of your home by itself. By 2011, the big tech brands had joined the party: Apple launched Siri, its intelligent virtual assistant, and a few years later Amazon launched Alexa, a smart speaker that could shop online for you.
But while consumer sentiment was broadly positive about machines that could help in our daily lives, one story got many thinking beyond the day to day. Eugene Goostman, a chatbot developed by a group of three programmers in 2001, won what was billed as the largest ever Turing test contest in 2012 (the Turing Test measures a machine's ability to exhibit behaviour indistinguishable from a human's), fooling 29% of judges into believing it was human. Human minds began thinking, and worrying, as we do: what might happen if machines got more intelligent?
Keep in mind that all the while early AI machines were being taught to learn by reading data and information, the internet was fast becoming a global free-for-all repository of (mostly unregulated) content. And as we discovered with Search Engine Optimisation (SEO) when the likes of Google Search took marketing and communications teams by storm from the late 1990s, the more people (and businesses) ask questions and provide answers, the quicker you can find an answer to pretty much anything. Now combine this with an intelligent machine that can read the entire internet and respond in relatively human conversation: enter the era of generative AI.
The world at our fingertips
By early 2023, two of the big tech companies in the west had launched universally accessible, user-friendly chat tools that brought AI to the masses: Microsoft launched Bing Chat and Google launched its platform Bard. Both were beaten to public release by OpenAI, developer of ChatGPT, founded in 2015 by a consortium of six Silicon Valley stalwarts, of whom Elon Musk was one (though he resigned from the board in 2018). And despite Microsoft's investment of over US$1 billion and a reported 49% share of its profits, OpenAI continues to operate as a private entity, projecting a non-profit, research-driven brand that "benefits all of humanity."
This was the point at which many businesses and creators changed the way they thought about AI. What was at first a novel conversation tool, or a way of finding answers to questions you would otherwise have to piece together from Google's search results, could now learn throughout a conversation and provide insights drawing on a wide variety of sources. Businesses started asking questions about reputation: could past controversies be rediscovered? Was information about their company correct and up to date? What if one of their employees entered confidential business information and it surfaced in a response elsewhere in the world?
As you can imagine, things can go wrong. Firstly, remember that by asking questions and providing information to generative AI tools such as Bard or ChatGPT, you are teaching them: the data you enter may be stored and used to shape future responses. If you're a business owner and you don't already have an internal AI usage policy, I suggest addressing this quickly. Treat content entered into AI tools as the equivalent of posting it in the public domain.
Secondly, while it's great that more and more people are finding new ways to use these tools, remember that their data bank is, in effect, the entire internet, and a vast amount of content on the internet is unverified. Multiple studies have shown that bias and stereotyping are significant issues, especially in image-generation AI tools: one recent study¹ of over 96,000 generated images found that prompts such as 'CEO' or 'director' overwhelmingly produced images of white men (in some cases 97% of the time), while images of cashiers and housekeepers skewed towards women of colour. Other issues have emerged around copyright in AI-generated content. Meanwhile, 'deepfake' content has become a powerful weapon in the wrong hands, as Sir Keir Starmer recently discovered when he was targeted by a fabricated audio clip falsely purporting to capture him swearing at staff members.
While governments and regulators across the world are exploring ways to manage the potential of publicly accessible AI tools, and the businesses behind them, we should not be afraid of a future alongside AI. But nor should we ignore the issues we face right now. In a similar way to refining a Google Search presence, now more than ever we must all be aware of our digital footprint.
So, what’s next?
The second half of 2023 has seen multiple iterations of these generative AI tools, each one bringing more integrations and wider possibilities. Microsoft Copilot is becoming a standard operating tool in many businesses, covering a wide variety of admin tasks and giving creative and strategic human minds more time to think. AI-powered chat features are being built into voice assistants such as Siri and Alexa. AI automation tools are streamlining how brands and influencers create and publish content. And more recently, custom GPTs (generative pre-trained transformers, the type of language model that underpins chatbots) are allowing developers and creators to build entirely new AI tools that will fuel bigger and brighter ideas still. It is a virtuous circle of technological development: like growing a plant in a garden that produces seeds that grow into new and better plants (a metaphor provided by Google Bard itself).
It's difficult to predict where this will ultimately go. It's already apparent that training and running AI models consumes enormous amounts of energy, which presents a real-world environmental challenge. One possibility is that GPT-powered AI tools become so capable that they automate much of what we currently consider human creativity and innovation. Alternatively, and more realistically, these tools could simply become ubiquitous: woven so deeply into everyday software and services that everyone has access to them and we barely notice them at all.
Whatever happens, AI is certainly here to stay and will only become more integrated into daily life over time. Are you ready to embrace it?
All views expressed are those of the author and are presented for information purposes only. They should not be considered a recommendation or solicitation to buy or sell any products or securities.
¹ Stable Bias: Analyzing Societal Representations in Diffusion Models (arxiv.org)