Tesla and SpaceX CEO Elon Musk has announced a new venture called Neuralink, a startup that aims to develop neural interface technologies connecting our brains to computers. Musk says it's the best way to prevent an AI apocalypse, but it's on this point that he's gravely mistaken.

As reported in The Wall Street Journal, the startup is still very much in its embryonic stages. The company, registered as a "medical research" firm, is seeking to pursue what Musk calls "neural lace" technologies, which presumably involve implanting tiny electrodes in the brain to create a connection with a computer. The resulting "direct cortical interface" could be used to upload or download thoughts to a computer, blurring the boundary between human and machine. Eventually, brain chips could be used to supplement and boost cognitive capacities, resulting in increased intelligence and memory. It's super-futuristic stuff, to be sure, but not outside the realm of possibility.

According to the WSJ, Musk is funding the startup and taking an active leadership role within the company. Several leading academics in the field have reportedly signed up to work at the firm, and Musk has apparently reached out to Founders Fund, an investment firm started by PayPal co-founder Peter Thiel. The Neuralink website currently consists of a logo on a single page, with an email address for those seeking employment. Late yesterday, Musk confirmed the existence of the startup via a tweet, adding that more details will appear next week via Wait But Why, a site that conveys complex topics with simple stick figures.
The emergence of intelligences far beyond human seems inevitable.
Quote from: KelvinZero on 03/29/2017 11:50 pm
the emergence of intelligences far beyond human seems inevitable

A good friend of mine is a professor of computer science. He assures me that what you say is "inevitable" won't happen for a long time, if ever. SKYNET remains a fantasy for the foreseeable future.
What uses could this brain/machine interface have in spaceflight?
Quote from: missinglink on 03/30/2017 12:17 am
Quote from: KelvinZero on 03/29/2017 11:50 pm
the emergence of intelligences far beyond human seems inevitable

There had been a general consensus that we were a decade away from a computer beating top Go players; then in six months (Oct 2015 to March 2016) we went from the first computer win over a professional Go player to a computer beating one of the top Go players in the world. Things are moving really fast, and it is impossible to accurately predict when the tipping point will be. I think it will be a while still, but it could be just 5 or 10 years away.
Quote from: meberbs on 03/30/2017 05:51 am
Quote from: missinglink on 03/30/2017 12:17 am
Quote from: KelvinZero on 03/29/2017 11:50 pm
the emergence of intelligences far beyond human seems inevitable

There had been a general consensus that we were a decade away from a computer beating top Go players; then in six months (Oct 2015 to March 2016) we went from the first computer win over a professional Go player to a computer beating one of the top Go players in the world. Things are moving really fast, and it is impossible to accurately predict when the tipping point will be. I think it will be a while still, but it could be just 5 or 10 years away.

The problem is that when it happens, it will be exponential, so you might not get an early warning...

The only spaceflight-related thing I can see in this thread is that we need to push forward space colonization before this happens, to have a chance to survive.
Quote from: IRobot on 03/30/2017 12:16 pm
Quote from: meberbs on 03/30/2017 05:51 am
Quote from: missinglink on 03/30/2017 12:17 am
Quote from: KelvinZero on 03/29/2017 11:50 pm
the emergence of intelligences far beyond human seems inevitable

There had been a general consensus that we were a decade away from a computer beating top Go players; then in six months (Oct 2015 to March 2016) we went from the first computer win over a professional Go player to a computer beating one of the top Go players in the world. Things are moving really fast, and it is impossible to accurately predict when the tipping point will be. I think it will be a while still, but it could be just 5 or 10 years away.

The problem is that when it happens, it will be exponential, so you might not get an early warning...

The only spaceflight-related thing I can see in this thread is that we need to push forward space colonization before this happens, to have a chance to survive.

With AI travelling at light speed, and humans on another planet with no biosphere, highly dependent on technology, how does being multiplanetary protect you?
Quote from: missinglink on 03/30/2017 12:17 am
Quote from: KelvinZero on 03/29/2017 11:50 pm
the emergence of intelligences far beyond human seems inevitable

A good friend of mine is a professor of computer science. He assures me that what you say is "inevitable" won't happen for a long time, if ever. SKYNET remains a fantasy for the foreseeable future.

* Lots of very bright people do think it is an issue. http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/
* It still concerns me even if it takes a thousand years. The timeframe is just not part of my argument.
* The specific strange possibility of it never happening, of us being the endpoint of intelligence, is something I specifically mentioned, because in that case I am trivially correct in my original point: that the author of the article was incorrect.
AI sounds magical until you study it at university and you realize it's just glorified statistics.
Quote from: high road on 03/30/2017 03:14 pm
Quote from: IRobot on 03/30/2017 12:16 pm
Quote from: meberbs on 03/30/2017 05:51 am
Quote from: missinglink on 03/30/2017 12:17 am
Quote from: KelvinZero on 03/29/2017 11:50 pm
the emergence of intelligences far beyond human seems inevitable

There had been a general consensus that we were a decade away from a computer beating top Go players; then in six months (Oct 2015 to March 2016) we went from the first computer win over a professional Go player to a computer beating one of the top Go players in the world. Things are moving really fast, and it is impossible to accurately predict when the tipping point will be. I think it will be a while still, but it could be just 5 or 10 years away.

The problem is that when it happens, it will be exponential, so you might not get an early warning...

The only spaceflight-related thing I can see in this thread is that we need to push forward space colonization before this happens, to have a chance to survive.

With AI travelling at light speed, and humans on another planet with no biosphere, highly dependent on technology, how does being multiplanetary protect you?

Yes, any space colony is going to have the latest in technology, so that's not going to help.

Quote from: KelvinZero on 03/30/2017 09:58 am
Quote from: missinglink on 03/30/2017 12:17 am
Quote from: KelvinZero on 03/29/2017 11:50 pm
the emergence of intelligences far beyond human seems inevitable

A good friend of mine is a professor of computer science. He assures me that what you say is "inevitable" won't happen for a long time, if ever. SKYNET remains a fantasy for the foreseeable future.

* Lots of very bright people do think it is an issue. http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/
* It still concerns me even if it takes a thousand years. The timeframe is just not part of my argument.
* The specific strange possibility of it never happening, of us being the endpoint of intelligence, is something I specifically mentioned, because in that case I am trivially correct in my original point: that the author of the article was incorrect.

It is a concern as technology progresses, and it's good to discuss such things so you don't get totally blindsided if the Singularity becomes a possibility. The timeframe is really decades or centuries, if ever. Simple digital electronics isn't going to produce SF-level AI. We don't even understand how the human brain works, so it's highly unlikely we would accidentally cause the Singularity.

Quote from: Oli on 03/30/2017 02:36 pm
AI sounds magical until you study it at university and you realize it's just glorified statistics.

Agreed.

I wonder if Elon would still think Neuralink will save humanity if he were to go watch "Ghost in the Shell" this weekend.
I am sure he's already seen the anime, and how is that an argument against this?
Quote from: Star One on 03/30/2017 04:07 pm
I am sure he's already seen the anime, and how is that an argument against this?

Any technology can lead to a dystopian future. If you buy into the hype, like Musk does, you're trading one possible dark future for another.

Developing direct brain-to-computer interfaces is a good idea, but doing it to prevent an AI apocalypse is kinda crazy and shows a lack of understanding of the technology.
Quote from: RonM on 03/30/2017 04:19 pm
Quote from: Star One on 03/30/2017 04:07 pm
I am sure he's already seen the anime, and how is that an argument against this?

Any technology can lead to a dystopian future. If you buy into the hype, like Musk does, you're trading one possible dark future for another.

Developing direct brain-to-computer interfaces is a good idea, but doing it to prevent an AI apocalypse is kinda crazy and shows a lack of understanding of the technology.

Does the imagined reason matter, though, if it helps develop the technology?
Could you expand a little on how it "shows a lack of understanding of the technology"?