
Artificial intelligence is nothing new in the tech space, but understanding its applications in day-to-day work and life has felt a little out of reach for many of us. In fact, even though it’s the buzz of the tech world right now, its true impact remains fuzzy at best.

AI has been quietly helping and guiding our daily lives for many years, from Autopilot in Teslas and targeted advertising on social media through to disease detection and banking security. Digital assistants like Alexa and Siri have become ubiquitous sources of information in homes and offices. But while AI has come far, it still can’t fully understand things that come naturally to humans – creative thinking, sarcasm, common sense, politics, the unpredictability of sports – the list goes on.

The current AI wave traces back to breakthroughs in machine learning in the 1980s and 90s, when algorithms began to “learn” from examples rather than follow step-by-step programming. This approach was inspired by human and animal cognition and the biological study of neural networks. A technique called deep learning then supercharged machine learning starting in the 2000s. This allowed AI to beat human champions at games like chess, Jeopardy and FIFA, which were previously thought to be the pinnacle of human intelligence. Ok, we threw FIFA in there to make sure you’re still with us.
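To make the “learning from examples” idea concrete, here’s a minimal toy sketch in plain Python. Instead of hard-coding the rule y = 3x, the program is shown only (input, output) pairs and nudges a weight until its predictions match them. The data and variable names are invented purely for illustration; real systems learn millions of weights this way, not just one.

```python
# Toy "learning from examples": estimate the slope w of y = w * x
# from data alone, never writing the rule y = 3x into the code.

examples = [(1, 3), (2, 6), (3, 9), (4, 12)]  # (input, output) pairs

w = 0.0                # start with no knowledge of the rule
learning_rate = 0.01

for _ in range(1000):  # repeatedly nudge w to shrink prediction error
    for x, y in examples:
        error = w * x - y          # how far off is the current guess?
        w -= learning_rate * error * x  # gradient step for squared error

print(round(w, 2))     # the learned slope settles near 3.0
```

Deep learning stacks many such learned weights into layered “neural” networks, which is what lets the same recipe scale from fitting a line to recognizing images or playing games.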

Today, AI is spreading into new industries, supported by harder, better, faster, stronger computer hardware like graphics processing units (GPUs) and open-source machine learning projects. Google’s DeepMind is tackling challenges from soccer strategy (hello FIFA) to protein folding, with the ultimate goal of “helping society find answers to some of the world’s most pressing and fundamental scientific challenges”. Microsoft and Amazon are cramming more AI into their digital tools like Cortana, Bing and Alexa, while startups apply AI to increasingly diverse fields from medicine to robotics. Still, plenty of room for improvement remains. Many human skills are out of reach, like reasoning from little data or quickly adapting to new environments. Despite the hype, history shows that progress in AI isn’t guaranteed. AI pioneers suggest fundamental rethinking may be required to reach more human-like versatility and general intelligence.

New generative AI models such as Midjourney, ChatGPT, Claude and Stable Diffusion also raise red flags. While their ability to remix data into new text, code and images is super impressive, their output lacks human-level understanding – a hallmark of general intelligence. These models require massive training datasets, which are costly and environmentally taxing to process, and which raise copyright issues – something that is forcing writers to strike in Hollywood and impacting the livelihoods of artists across the world. These platforms’ flaws also risk automating the spread of biases, misinformation and disinformation at scale – undoing problems we as humans have worked hard to fix. Tighter regulation may be warranted for high-stakes AI use cases with potential for harm.

In short, AI is full of exciting potential to enhance human capabilities. Yet the pitfalls of deploying it recklessly, without oversight, are equally real. With care, wisdom and responsible development, AI can augment and enhance our abilities rather than replace them outright. But we must approach rising machine intelligence thoughtfully, not blindly buy into the buzz or ignore the increasingly obvious downsides.

The true AI revolution will come when we find the right balance between cutting-edge wizardry and ethical wisdom.