
Google’s Machine Learning Algorithms Outpacing Engineers’ Ability to Understand How They Work
“Google no longer understands how its ‘deep learning’ decision-making computer systems have made themselves so good at recognizing things in photos.
What stunned [Google Software Engineer] Quoc V. Le is that the software has learned to pick out features in things like paper shredders that people can’t easily spot – you’ve seen one shredder, you’ve seen them all, practically. But not so for Google’s monster.
Many of Quoc’s pals had trouble identifying paper shredders when he showed them pictures of the machines, he said. The computer system has a greater success rate, and he isn’t quite sure how he could write a program to do this.
Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently from its creators, and its complex cognitive processes are inscrutable.”
(via The Register ht algopop)
All those gigantic server farms… they’ve accidentally exceeded the critical synapse number, haven’t they.
Life is stirring in the depths…
AS AN ENGINEER THIS IS ACTUALLY VERY TERRIFYING, NO PROJECT SHOULD EVER EXCEED THE PARAMETERS OF OBSERVATION. WE ARE TERRIFIED OF THIS.
Othman used the instrument on Gulliman’s desk.
His fingers punched out the question with deft strokes: ‘Multivac, what do you yourself want more than anything else?’
The moment between question and answer lengthened unbearably, but neither Othman nor Gulliman breathed.
And there was a clicking and a card popped out. It was a small card. On it, in precise letters, was the answer:
‘I want to die.’
(via mindovermana)
