It’s all go in the tech world. Facebook has launched its own crypto-currency, and the original and best – Bitcoin – is back in the game, shooting up in value. The USA is busy developing AI-baffling military camouflage that taps into the murky world of adversarial imagery. And the digital doctor is ready for you. Here’s the news.
Facebook’s new crypto-currency – Is Libra safer than banks?
Leaving aside the obscene amount of electricity required to mine crypto-currencies and keep the blockchain alive, it’s exciting to see digital money coming back into fashion. Bitcoin prices go up and down like a particularly lively yo-yo, and that makes them the darling of the risk-takers amongst us. Vast sums can be made, but also lost in the blink of an eye. Now Facebook has joined the party with its own digital currency, Libra.
It might sound like a sanitary product for women, but Libra – due to launch in 2020 – is Facebook’s contribution to the crypto-currency world. It’ll be built into the network, designed to provide integrated payments, and you’ll be able to send money at ‘no or low cost’, paying just a small commission.
Visa and Mastercard both support the new currency, as do Uber, Spotify and eBay, and that’s big news. This is a fresh departure for Facebook, and the very fact that the network is getting into crypto-currency is rebooting interest in blockchain technology as a whole.
The only concern is that Facebook is already deeply intertwined with our lives. Do we really want a social network that already collects data about us willy-nilly, often without our express permission, to know all about our finances too? Should Facebook be allowed to act like a bank? Once you start using Libra, will it become even harder to leave the platform?
No wonder Bank of England Governor Mark Carney cautions that Libra must be subject to proper regulatory oversight, data protection and user privacy safeguards. On the other hand, traditional banks are not exactly known for taking care of consumers or managing money responsibly. They trashed the world’s economy in 2008, and they’ve carried on as normal ever since. So who’s the real baddie?
The digital arms race – New camouflage designed to trick enemy AIs
There’s a lot of tosh talked about artificial intelligence. It isn’t actually very intelligent; it’s mostly a bit thick. AI is particularly bad at recognising images, as evidenced by the software being designed for driverless cars. Show it an object from an unusual angle, or an image subtly doctored to confuse it – an ‘adversarial example’ – and it rarely has the faintest clue what it’s looking at.
On the other hand, failures like this can be handy. The USA is busy taking advantage of image recognition’s Achilles heel, awarding contracts to three companies that’ll design ‘foolkits’ to camouflage military equipment so it can’t be spotted by enemy AIs. Nope, it isn’t really a tank. It’s a Fiat Panda / giraffe / cheesecake.
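For the curious, here’s what the digital version of that trick looks like. This is a minimal sketch of the classic ‘fast gradient sign method’ for crafting adversarial images – a textbook illustration, nothing to do with the actual defence contracts, and the model and inputs are placeholders:

```python
# A minimal sketch of the Fast Gradient Sign Method (FGSM), one classic
# way to craft an adversarial example. Any differentiable PyTorch image
# classifier could stand in for `model`.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor,
                label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Nudge every pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The change is tiny – often invisible to a human – but it is chosen
    # precisely to push the model towards the wrong answer.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The perturbation is usually invisible to a human, yet it can turn the model’s confident ‘tank’ into a confident ‘cheesecake’. Physical camouflage aims for the same confusion, only with paint and fabric instead of pixels.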
Forget the doctor – The algorithm will see you now…
Algorithms trained on vast amounts of data are already carrying out some medical diagnoses more accurately than human doctors. Thousands of us have already been screened by an AI system designed to detect diabetic retinopathy – blindness related to diabetes – one that consistently outperforms human doctors. And there are more AI-led diagnostic systems in the pipeline.
So far so good. The problem is, most data-sets contain inherent biases, and those biases are passed on to the algorithm being trained. Did you know that almost every new drug or medicine is still tested on trial groups with a gender bias, typically skewed towards men? We end up knowing all about the effects of a new drug on men, but very little about its impact on women. This has been going on for decades, and it’s exactly the kind of bias that can easily make its way into a data-fed algorithm or AI.
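To make the point concrete, here’s a toy sketch – entirely synthetic data and made-up effect sizes – of how a trial cohort skewed towards men produces a model that quietly underperforms for women:

```python
# An illustrative sketch (synthetic data, made-up effect sizes) of how a
# gender-skewed training set yields a model that is less reliable for
# the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, is_female):
    """Synthetic 'drug response' data where the true effect differs by sex."""
    dose = rng.uniform(0, 10, n)
    # Hypothetical premise: the dose at which the drug starts working
    # is higher for women than for men.
    threshold = 6.0 if is_female else 4.0
    responds = (dose + rng.normal(0, 1, n) > threshold).astype(int)
    return np.column_stack([dose]), responds

# A trial cohort skewed 90/10 towards men, mirroring historical practice.
X_m, y_m = make_patients(900, is_female=False)
X_f, y_f = make_patients(100, is_female=True)
model = LogisticRegression().fit(np.vstack([X_m, X_f]),
                                 np.concatenate([y_m, y_f]))

# Evaluate on balanced held-out groups: accuracy is visibly worse for women.
X_mt, y_mt = make_patients(1000, is_female=False)
X_ft, y_ft = make_patients(1000, is_female=True)
print("accuracy on men:  ", model.score(X_mt, y_mt))
print("accuracy on women:", model.score(X_ft, y_ft))
```

The model isn’t malicious; it has simply never seen enough women to learn that the pattern is different for them.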
There’s more. Medical regulators seem to apply lower standards to algorithms than they do to drugs and devices, because they feel algorithms are less risky. And other, more sinister problems have arisen. Take breast cancer screening. One study in 2013 found that for every woman whose life was saved by screening, ten women received totally unnecessary treatment and two hundred suffered years of stress for no reason. In this case, over-diagnosis can be just as distressing as under-diagnosis. Where do we draw the line?
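The arithmetic behind over-diagnosis is worth spelling out. Here’s a back-of-the-envelope Bayes calculation – the numbers are illustrative assumptions, not figures from the 2013 study – showing why screening for a rare condition generates so many false alarms:

```python
# A back-of-the-envelope Bayes calculation (illustrative numbers only)
# showing why screening for a rare condition produces so many false alarms.
prevalence = 0.005     # assume 0.5% of those screened actually have the cancer
sensitivity = 0.90     # assume the test catches 90% of true cases
specificity = 0.95     # assume 5% of healthy women get a false positive

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

# Probability that a woman with a positive result actually has the disease:
ppv = true_pos / (true_pos + false_pos)
print(f"positive predictive value: {ppv:.1%}")   # roughly 8% with these numbers
# So for every true case found, around eleven healthy women are pulled in
# for stressful, sometimes unnecessary, follow-up.
```

Even a test that sounds accurate swamps the handful of real cases with false positives, simply because the disease is rare among those screened.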
Having said all that, the long-term future of AI-based medical systems is looking bright. AI might just make medicine better by freeing doctors from having to trawl through vast amounts of data, giving them more time to focus on good patient relationships and positive medical outcomes.