Forget fancy-pants folks growing up in posh suburbs in the likes of London, New York, Shanghai, Tokyo or Dubai. People who change the world can come from anywhere and any background. Take local boy Shane Legg, a former student of Rotorua Lakes High School. He is incredibly low key, yet years ago he gave up his time to quietly meet me in London, where we discussed some of NZ's long-standing constraints.
As co-founder of DeepMind, Shane has just been selected by TIME magazine as one of the world's 100 most influential people in Artificial Intelligence. DeepMind merged with Google Brain in April to form Google DeepMind, where Shane is Chief AGI Scientist. TIME's selection noted that "Shane Legg's research in the field of general artificial intelligence has a far longer horizon than the immediate one and is fundamental to understanding how much machines will be able to be truly equal, or even better, than humans".
In 2011, Shane estimated there was a 50% chance that human-level machine intelligence would be created by 2028. He tells TIME he first made this prediction more than two decades ago, while working as a software engineer, after reading Ray Kurzweil's The Age of Spiritual Machines, and that he has yet to change his mind.
What Shane says should scare the crap out of everyone on the planet. “If AI is making the world a better and more ethical place, then that’s very exciting,” he says. “I think there are many problems in the world that could be helped by having an extremely capable and ethical intelligence system. The world could become a much, much better place.”
So the future depends on whether the machines are "ethical"? Is he saying machines could create a more moral world than humans have? In 2001: A Space Odyssey, the AI computer HAL went mad and tried to kill the crew, who were attempting to shut it down. Shane had better get the coding that makes the machine "ethical" right.
Sources
https://time.com/collection/time100-ai/6310659/shane-legg/
Professor Robert MacCulloch holds the Matthew S. Abel Chair of Macroeconomics at Auckland University. He has previously worked at the Reserve Bank, Oxford University, and the London School of Economics. He runs the blog Down to Earth Kiwi, from which this article was sourced.
Comments
Oh dear! The definition of "ethical" can span a whole spectrum of views and opinions, and these days tends to the extreme end, leaving a lot of people defined as "unethical".
I don't want to be on the wrong end of an "intelligent" computer system which decides I'm no longer ethical and then coldly and calculatingly works out what the most efficient solution is to solve its problem.
Lock all my doors and windows and put the air-conditioning into polar mode!!
I like the healthy dose of scepticism. AI will mirror human behaviours and get similar outcomes, or go rogue, and anything could happen. The old adage about computing still applies: "Garbage in, garbage out."
Some people are excellent at what they do while others falsely think they are. Unfortunately, the difference doesn't become obvious until the outcomes do. I see the whole AI enterprise as a huge risk.
Besides, in today's Social Justice ("I know best what you need") climate, who would trust programmers or AI to deliver for mankind in general? Or will the algorithms deliver for the globalists?
MC
Thanks Robert. With his very low profile, most Kiwis, like me, would not have heard of him.
It would be really interesting to learn a bit about what you discussed with him in relation to NZ's issues, if it is not confidential.
In my view, NZ's issues and future direction will not be resolved by our politicians and bureaucrats. I do not know how it could be accomplished, but we need to somehow pull together "big picture" thinking people like Shane, Stephen Jennings etc. from a wide range of fields and get them to help, at least, brainstorm ideas to get NZ away from the edge of the financial cliff we are currently close to falling over, and to head us in another direction: economically, culturally and, yes, ethically.
I am not suggesting yet another conference, but more a "back room" get together.
To DeeM - What should concern you from the information presented is that Google has bought DeepMind and incorporated it into their domain.
This strongly suggests that American computer groups want to dominate the AI spectrum, and who knows what "dictates" they will create (under the guise of a document titled "Community Standards", just like they have now) that will set the T&Cs for users to follow when accessing any computer platform, system, data storage domain etc.
I wonder, for future communication systems, if we should "go back to the future" and use Semaphore? Or maybe we could start writing letters again, thus reinvigorating the Postal Service!
I think DeeM may have found a weak point in the development. It seems that a human controller remains an imperative. Who would have thought?