It won’t surprise anyone who doesn’t live in a cave without an internet connection (caves with internet connections in London rent for remarkable sums) that the term ‘fake news’ was selected as 2017’s ‘Word of the Year’ by the expert but apparently innumerate lexicographers of the Collins Dictionary. In 2017, the phrase’s use increased by more than 350%, so it managed to beat other worthy contenders to the presumably coveted title, including ‘gig economy’, ‘gender-fluid’ and ‘echo chamber’.
One of the enabling forces behind the rise of believable but untrue information posing as news has been the development of competent artificial intelligence. Whether you prefer your AI weak or strong, narrow or general, you may well have read something that was planned, produced or promoted by what is either a world-saving or a potentially cataclysmic technology, depending on how you view a glass with some water in it.
But if AI can make supposedly intelligent people believe that what they are reading is true, could its application in other disciplines have similarly disquieting consequences?
AI, ‘machine learning’ and ‘deep learning’ are the technological terms du jour, and the geospatial industry has not escaped the rise of the robots. Many start-ups and established companies are exploiting the affordable availability of powerful processing in the cloud to provide improved and often brand-new products.
The sheer size of geospatial vector, raster and point cloud datasets makes them perfect fodder for the voracious appetite of computer-based learning algorithms. Around 440 million individual landmarks and features have been assigned topographic identifiers in the UK alone. When Landsat satellite images were stitched together to provide a single picture of the world’s land mass, the resulting mosaic contained 3.1 trillion pixels!
Processing these kinds of numbers would take some time by hand, so a computer that can teach itself how to make better end-products is the perfect employee. Spotting patterns or connections that have never been seen or even thought of before could generate completely new insights into how the physical and human worlds can co-exist. Traffic will ceaselessly flow, people will live in harmony, and all will be good in the land of milk and honey.
But if AI can be used to influence how people think through the dissemination of fake news, what could a ‘bad actor’ do with the ability to create ‘fake geography’? I sincerely hope that term won’t be challenging for the Collins title next year, as it would mean serious problems for everyone.
Location and the technology that accurately provides it have become fundamental to the running of our lives. Without it, so many services would fail that even the most civilised of countries – yes, I’m looking at you, Scandinavia – would become lawless within the month.
That’s a ‘fail-scenario’, as I’m sure they would say in action movies. But what if worldwide location technology was seemingly working perfectly, all the while acting in someone else’s interests, or its own?
Elon Musk has argued that “by the time we are reactive in AI regulation, it will be too late.” By the time the people tasked with keeping us safe recognise a threat, they may well be too late. Or even somewhere else.
Few wars are fought over something other than the ownership of land or sea. How quickly would one start if people believed that someone had moved borders? The original platinum bar defining the length of a metre is still preserved in Paris – will we need a similarly analogue cartographic equivalent in the future, just in case?
Ultimately, we may not need to worry about any of this if the algorithms designed to create a better world decide that, given the impact we have had on the environment they were born to protect, the only reasonable course of action to fulfil their reason for being is to stop us being here at all.
Still, Happy New Year!
Alistair Maclenan is founder of the geospatial B2B marketing agency Quarry One Eleven (www.quarry-one-eleven.com)