You might have noticed a lot of excitement around AI at the moment, with the likes of Google, Facebook, Microsoft, and Amazon all vying to be the loudest voice promoting the newest buzzword. In some cases it is even being heralded as the Fourth Industrial Revolution. With AI-powered Go and chess champions, it seems like ‘the singularity’ is fast approaching.
Even Apple’s recent WWDC keynotes saw software chief Craig Federighi casually announce new Core ML APIs aimed at attracting AI-conscious coders, APIs that will increase the use of things like facial recognition and semantic text cognition in future Apple products.
To be clear though, AI isn’t quite yet the stuff of science fiction – although Skynet may actually exist, what we have today doesn’t resemble the Terminators, or even 2001: A Space Odyssey’s HAL, luckily for us all.
What we do have is machine learning and neural networks that can be ‘trained’ with vast data inputs to recognise patterns and draw parallels in specific areas, but these systems almost always operate in conjunction with a human operator, rather than solo and sentient. Full Hollywood-style robot AI that can relate to any human experience is some way off, but there is plenty of it in development, advancing all the time, and in places you might not automatically expect.
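To make the idea of ‘training’ on data concrete, here is a deliberately tiny sketch: a single perceptron – the simplest ancestor of today’s neural networks – learning to separate two clusters of points. The data and weights are invented for illustration; real systems train millions of parameters on vast datasets, but the loop of predict, compare, adjust is the same.

```python
# Toy illustration of training on data to recognise a pattern:
# a single perceptron learns to separate two clusters of points.
# Everything here is invented for illustration, not a production system.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # only adjust the weights on mistakes
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else -1

# Two easily separable clusters: label +1 near (1, 1), -1 near (-1, -1)
samples = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),
           (-1.0, -0.8), (-0.9, -1.1), (-1.2, -1.0)]
labels = [1, 1, 1, -1, -1, -1]

w, b = train_perceptron(samples, labels)
print(predict(w, b, (1.0, 1.0)))    # prints 1
print(predict(w, b, (-1.0, -1.0)))  # prints -1
```

Once trained, the system classifies points it has never seen – but only within the narrow pattern it was trained on, which is exactly the gap between today’s machine learning and Hollywood AI.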
Your local cop
Just as Minority Report explored the themes of futuristic crime-fighting, free will, determinism and RSI-free desktops, so the UK's Durham police force have stepped up with their vision of the future of law enforcement.
Dubbed HART (Harm Assessment Risk Tool), the AI pores over five years of data from Durham police in order to decide whether a suspect is at low, medium or high risk of offending.
In testing the tool has proved reasonably accurate – forecasts that a suspect was low risk turned out to be correct in 98% of cases, while forecasts that they were high risk were accurate 88% of the time.
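Figures like these are per-forecast accuracies: of all the times the tool predicted a given risk level, how often was it right? A short worked sketch shows how such numbers are derived – the counts below are invented purely for illustration and are not HART’s real data.

```python
# Worked sketch of how per-forecast accuracy figures like HART's 98% / 88%
# are computed. All counts below are invented for illustration only.

low_risk_forecasts = 1000   # hypothetical number of 'low risk' predictions
low_risk_correct = 980      # of those, how many turned out to be right

high_risk_forecasts = 500   # hypothetical number of 'high risk' predictions
high_risk_correct = 440     # of those, how many turned out to be right

low_accuracy = low_risk_correct / low_risk_forecasts
high_accuracy = high_risk_correct / high_risk_forecasts

print(f"low-risk forecasts accurate:  {low_accuracy:.0%}")   # prints 98%
print(f"high-risk forecasts accurate: {high_accuracy:.0%}")  # prints 88%
```

Note that the two figures can differ sharply because low-risk and high-risk predictions are scored separately – a tool can be excellent at one and mediocre at the other.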
Meanwhile Middlesex University London are also working with the police force to trial a system called VALCRI, designed to do the data-crunching part of the job of an analyst at a crime scene. VALCRI scans millions of police records, interviews, pictures and videos, hunting for connections between cases, presenting the results on two large touchscreens for a human analyst (sadly not played by Tom Cruise) to interact with.
Your money
The road to fully-automated AI control of your wallet is likely to be a long one, but even today there is a considerable amount of semi-autonomous AI tech operating on the open markets.
A startup called Sentient Technologies Inc has spent nearly a decade training an AI programme to sift through millions of market data points in order to flag the next big thing.
Of course, it’s not just the startups – the world’s biggest hedge fund group, Bridgewater, recruited the former head of IBM’s artificial intelligence unit Watson way back in 2012, and in 2015 both BlackRock and Two Sigma headhunted top Google engineers.
In many ways this is no surprise – it’s thought that around 1,360 hedge funds, the traditional “quant” (quantitative) funds, rely on sophisticated computer modelling in order to manage a total value of more than $197 billion.
However, the neural networks of the 90s, which themselves gave birth to programmatic trading, are now on the cusp of being superseded by today’s much more powerful AI machines. The ethical questions of what to do if – and more likely when – a ‘stockmarket singularity’ occurs are already being hotly debated. Yet even the most bullish have been cowed by some of the teething problems automated trading tech has faced: in one 2012 incident, a company called Knight Capital found itself down $440 million after an automated trading programme ran amok.
Your smart speaker
Apple CEO Tim Cook threw down the gauntlet to competitors at the WWDC announcement of the HomePod, saying that although there are many home music products on the market, “none have nailed it yet”. It’s undeniably a big market though – globally, Consumer Intelligence Research Partners (CIRP) estimates that over 8 million customers had bought Amazon Echo devices by the start of January 2017; and at the end of the first quarter of 2017 there were over 10,000 ‘skills’ available for use with Alexa – a sharp 100% increase on the last quarter of 2016.
However, much as we’d like this to be, Alexa and Siri aren’t yet really ‘full’ AI. Chris Mitchell, CEO, Audio Analytic, explains: “Although this technology can convert speech into search strings and pick up on certain words, the impetus is still on us to learn how to ask for different tasks.
"It’s an exciting way for consumers to interact with products, but there’s a much brighter future ahead. When we look back in a few years’ time we’ll see this as the entry point for what happens next, which will be true AI. We’ve currently got the first generation, which is the connected home, where devices talk to each other under our command. The future, however, is the intelligent home. This is where products are context aware through human-like senses such as hearing, and empowered to react to what is happening around us for our benefit.”
Your work computer
If you work at a large company, chances are your computer at work is being invisibly protected by AI.
The IT security industry has spent big on AI and machine learning in recent years, and there are a host of companies that claim to use it every day. Anti-malware vendor Sophos recently acquired Invincea for up to $120m to add AI/machine learning smarts to its platform, intended to spot previously unknown zero-day threats.
Security is particularly fertile ground for machine learning (ML) technology: as the volume of threats grows exponentially, businesses have begun to turn to more automated methods of filtering them and providing initial analysis before human intervention is required.
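In spirit, this kind of triage works by scoring each event and only surfacing the high scorers to a human analyst. A minimal sketch of the idea follows – real products learn their features and weights from data rather than hard-coding them, and every feature name, weight and threshold below is invented for illustration.

```python
# Minimal sketch of ML-style triage for security events: score each event
# and only escalate high scorers for human review. A real product would
# learn these weights from data; everything here is invented for illustration.

SUSPICIOUS_WEIGHTS = {
    "failed_logins": 0.3,    # repeated failed login attempts
    "new_executable": 0.5,   # a previously unseen binary was run
    "off_hours": 0.2,        # activity outside normal working hours
}

def triage_score(event):
    """Weighted sum of the suspicious features present in an event."""
    return sum(w for feat, w in SUSPICIOUS_WEIGHTS.items() if event.get(feat))

def needs_human_review(event, threshold=0.5):
    return triage_score(event) >= threshold

events = [
    {"host": "laptop-1", "failed_logins": True},                      # score 0.3
    {"host": "server-2", "new_executable": True, "off_hours": True},  # score 0.7
]

escalated = [e["host"] for e in events if needs_human_review(e)]
print(escalated)  # prints ['server-2']
```

The point of the design is volume reduction: the vast majority of events never reach an analyst, which is exactly the “initial analysis before human intervention” role described above.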
Of course, you should be grateful that you still have a job – recent research indicated that there is a 50% chance that machines will outperform humans in all tasks within 45 years. The robots will be better than us at translating languages by 2024, writing high-school essays by 2026, driving a truck by 2027, working in retail by 2031, writing a bestselling book by 2049 and performing surgery by 2053. In fact, all human jobs will be automated within the next 120 years, according to experts working in the field.
Your internet browsing
Searching the web and shopping online are two of the most advanced everyday examples of AI and machine learning shaping the products you buy and even the thoughts you have – and they also demonstrate clearly where the line between man and machine currently falls.
Earlier this year, responding to criticism over hate speech and extremism, Google deployed an army of 10,000 human ‘quality rater’ contractors to check through the search giant’s algorithmic results for “offensive or upsetting” content. A specific example from their training manual was a search for “did the Holocaust happen” that returned a link to the white supremacist holocaust denial forum Stormfront as the top result. However, the raters’ rankings do not directly impact search results, but are used to train and assess the company’s machine-learning systems.
Rachel Jones, Senior Strategy Designer at Hitachi Europe, commented: “AI and machine learning impact us today mainly in the consumer space, such as in advertising on Google and Facebook. This may seem to be insignificant, but think how it has influenced UK and US politics.
"Looking to the future, AI and machine learning techniques will become part of our everyday infrastructure, improving our services – whether that is being able to predict maintenance on our railways, provide travel schedules that vary with the number of passengers that need transport, the way supply chains operate in manufacturing, or the way energy is distributed and used.”
Your health and fitness
With surgeons set to lose their jobs in 37 years, it’s not surprising that health and fitness should be the subject of considerable AI R&D.
In the UK, a company called Babylon Health is trialling an AI-powered app until July 2017 in collaboration with the NHS. Users can submit their symptoms to the app to receive a recommended course of action, based on a combination of algorithms, clinicians and data analytics. The average time to complete an interaction on the slightly ominously named Babylon is 12 minutes – considerably better than listening to the hold music on the NHS 111 dial-in number.
On a more serious note, a company called Medtronic is using IBM’s Watson AI platform to develop what they claim to be the first ‘cognitive app’, Sugar.IQ, for diabetes sufferers. The app can apparently predict diabetic events three to four hours before they happen, with a 75% to 86% accuracy rate, according to a study by F5 Networks.
Overall, it’s fair to say that full-fat AI is still some way off – which is lucky for us humans, as we’re clearly not quite ready to deal with the repercussions.
Jason Alan Snyder, Chief Technology Officer, Momentum Worldwide said: “The main concern of beneficial-AI is with intelligence: specifically, intelligence whose goals are misaligned with ours. Misaligned superhuman intelligence requires just an Internet connection to outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot begin to understand. So a super-intelligent and super-wealthy AI could easily pay or manipulate humans to unwittingly do its bidding.
“The Hollywood-style robot fantasy is part of the myth that machines can’t control humans. Intelligence enables control: humans control lions not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.”
Maybe that deep-seated fear of the robots taking over might be justified. Maybe.