Following decades as an antagonist in science-fiction films, artificial intelligence (AI) – in the sense of replicating (and ultimately surpassing) human intelligence in a machine – has finally become a mainstream possibility.
Over the years, machines have used narrow, task-specific artificial intelligence to perform repetitive and unimaginative tasks, generally replacing lower-skilled labour. This has often freed that labour for more “interesting” work, but there is nothing new in that.
Now there is a new game in town: DeepMind, a UK company bought by Google in 2014, has taken AI to a whole new level. AI has traditionally relied on the laborious hand-coding of commands, producing software that responds to different problems in a way that gives the illusion of intelligence – but which is ultimately just very detailed, painstakingly written software.
DeepMind takes a different approach. The company created algorithms that allow its AI to learn, in a loose sense, like a human brain: making sense of large amounts of non-specific data in pursuit of a particular objective. This means the AI can learn by itself, without extra coding for each new task. Its main accomplishment so far has been playing various video games from the 70s and 80s and – without game-specific programming – learning to get better at them, even surpassing the best human players. Most recently, DeepMind's AI beat the European Go champion.
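To give a flavour of what “learning without extra coding” means: DeepMind's game-playing results were built on reinforcement learning, where the system improves its own value estimates from trial-and-error reward signals rather than from hand-written rules. The sketch below shows tabular Q-learning, the simplest relative of that idea, on a toy environment invented for illustration – the environment, state and action names are hypothetical, not DeepMind's actual code.

```python
import random

random.seed(0)  # for reproducible results in this toy example

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy line-world: states 0..4,
    move left (-1) or right (+1), reward 1 for reaching state 4."""
    n_states, actions = 5, [-1, +1]
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit current knowledge,
            # occasionally explore a random action.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # The core self-learning step: nudge the value estimate
            # toward observed reward plus discounted best future value.
            best_next = max(q[(s2, a2)] for a2 in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
```

No rule about the game is ever coded in; after training, the table `q` simply ranks “move right” above “move left”, because right-moving actions led to reward. DeepMind's Atari work replaced the table with a deep neural network so the same idea could scale to raw screen pixels.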
It’s all very interesting stuff and definitely something to keep an eye on, as the potential applications of this technology could be limitless.