Skeptic Friends Network

 Strong AI and the Singularity

dv82matt
SFN Regular

760 Posts

Posted - 10/10/2007 :  08:33:30
I wonder what people here think of the transhumanist idea of strong AI, and of the attendant scenario in which a human-equivalent AI quickly bootstraps its intelligence far beyond the human level, an event known in transhumanist circles as the singularity.

Some arguments in favor of the plausibility of strong AI:
1. The human brain serves as a working example of human-level intelligence, so we know that this much is at least possible, though perhaps not yet feasible.
2. If computing power continues to increase at the present rate, then we should soon (perhaps in around a decade) see computers with processing power roughly equivalent to the human brain's (see the sketch after these lists).
3. Most people engaged in the field are optimistic that human-level intelligence can be engineered.
4. There is no reason to believe that humans represent the acme of what is possible for intelligence.

Arguments against:
1. Processing power alone is not sufficient for AI, and a general-purpose intelligence algorithm may be too difficult for us to work out.
2. The track record of AI research is poor.
3. Even if it is possible to develop human-level AI, we will probably choose not to because of fears of the threat it could pose to us.
4. Human intelligence may be very near the peak of what is possible. If so, then although human-level AI may be feasible, such an intelligence would not be able to bootstrap itself to yet higher levels of intelligence.
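
To make argument 2 concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is an illustrative assumption of mine (brain-equivalent throughput estimates span several orders of magnitude, and the 2007 baseline and doubling time are rough guesses), not a figure anyone here has committed to:

import math

# All three constants are illustrative assumptions, not established figures.
BRAIN_OPS_PER_SEC = 1e16       # assumed brain-equivalent ops/s (estimates range ~1e13 to 1e18)
BASELINE_OPS_PER_SEC = 3e14    # assumed throughput of a ~2007 top supercomputer
DOUBLING_TIME_YEARS = 1.5      # assumed Moore's-law doubling period

# Years until the baseline, doubling on schedule, reaches the brain estimate.
doublings = math.log2(BRAIN_OPS_PER_SEC / BASELINE_OPS_PER_SEC)
years = DOUBLING_TIME_YEARS * doublings
print(f"roughly {years:.0f} years to brain-equivalent raw compute")  # ~8 with these inputs

With these inputs the answer lands in the same "decade or so" ballpark as argument 2, but note how fragile it is: move the brain estimate by a factor of 100 and the date shifts by roughly a decade in either direction.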

As a follow-up, if the singularity seems plausible, what, if anything, should we do about it? Should we work to prevent it or encourage its arrival? Should we try to shape it so that the resulting AI is friendly to us, or would that be a futile endeavor? Or should we simply ignore it, or take a wait-and-see approach?

Any takers?

Dr. Mabuse
Septic Fiend

Sweden
9688 Posts

Posted - 10/10/2007 :  14:07:28
I recall a thread from a number of years ago where we discussed how long, given Moore's law applied to computer performance, it would take before we had a supercomputer powerful enough to run a simulation of as many neurons as the human brain contains; the answer came out to around 2024. I suppose that calculation didn't take into account any overhead for the connectivity between the neurons, but that would be minor compared to the total computing power needed to simulate the neurons themselves.

The simulation of a neuron would be a simulation of the processes within the neuron, and interconnecting many such simulated entities would create an AI net.
This would "solve" most of the problems behind AI's current poor track record.
Building an AI not based on a complete simulation of the human neuron would lower the computing power requirement, but how would such an AI develop?
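
As a concrete, if drastically simplified, illustration of "simulate the processes within the neuron and interconnect many such entities," here is a sketch using leaky integrate-and-fire neurons in Python. This model is my own stand-in, not the simulation discussed in that old thread, and every parameter is an arbitrary illustrative choice:

import random

# Illustrative parameters only; real neuron models are far more detailed.
N = 100            # simulated neurons (a human brain has on the order of 1e11)
DT = 0.001         # time step, seconds
TAU = 0.02         # membrane time constant, seconds
THRESHOLD = 1.0    # membrane potential at which a neuron fires
WEIGHT = 0.1       # potential added to each downstream neuron per spike

# Random sparse wiring: each neuron excites 10 others.
targets = [random.sample(range(N), 10) for _ in range(N)]
v = [0.0] * N      # membrane potentials
total_spikes = 0

for step in range(1000):                  # one simulated second
    spikes = []
    for i in range(N):
        leak = -v[i] * (DT / TAU)         # potential decays toward rest
        drive = random.gauss(0.05, 0.03)  # noisy external input per step
        v[i] += leak + drive
        if v[i] >= THRESHOLD:             # the "process within the neuron": fire and reset
            spikes.append(i)
            v[i] = 0.0
    total_spikes += len(spikes)
    for i in spikes:                      # the interconnection: propagate spikes
        for j in targets[i]:
            v[j] += WEIGHT

print(f"{total_spikes} spikes across {N} neurons in one simulated second")

Even at this toy scale the shape of the 2024 calculation is visible: the cost is neurons × per-neuron model work × time steps, so the fidelity of the per-neuron model dominates and the connectivity bookkeeping is comparatively minor, as suggested above.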

Dr. Mabuse - "When the going gets tough, the tough get Duct-tape..."
Dr. Mabuse whisper.mp3

"Equivocation is not just a job, for a creationist it's a way of life..." Dr. Mabuse

Support American Troops in Iraq:
Send them unarmed civilians for target practice..
Collateralmurder.

Ricky
SFN Die Hard

USA
4907 Posts

Posted - 10/10/2007 :  14:37:20
quote:
3. Most people engaged in the field are optimistic that human-level intelligence can be engineered.

I'm optimistic that the goal I am working to reach is reachable, too.

quote:
2. The track record of AI research is poor.

This one seems rather unfounded. What do you mean by "poor"? When computers first became popular, there was an idea that we would have a computer you could not distinguish from a human. If that's your definition of poor, then jet pack development has also been poor. The field of AI, however, has grown by leaps and bounds. Perhaps it hasn't lived up to our dreams, but what has?

quote:
4. Human intelligence may be very near the peak of what is possible. If so, then although human-level AI may be feasible, such an intelligence would not be able to bootstrap itself to yet higher levels of intelligence.

History argues otherwise.

Why continue? Because we must. Because we have the call. Because it is nobler to fight for rationality without winning than to give up in the face of continued defeats. Because whatever true progress humanity makes is through the rationality of the occasional individual and because any one individual we may win for the cause may do more for humanity than a hundred thousand who hug their superstitions to their breast.
- Isaac Asimov
Edited by - Ricky on 10/10/2007 14:39:03

filthy
SFN Die Hard

USA
14408 Posts

Posted - 10/10/2007 :  15:12:30
quote:
As a follow-up, if the singularity seems plausible, what, if anything, should we do about it? Should we work to prevent it or encourage its arrival? Should we try to shape it so that the resulting AI is friendly to us, or would that be a futile endeavor? Or should we simply ignore it, or take a wait-and-see approach?

Install a reliable, remote on/off switch. Then have at it.




"What luck for rulers that men do not think." -- Adolf Hitler (1889 - 1945)

"If only we could impeach on the basis of criminal stupidity, 90% of the Rethuglicans and half of the Democrats would be thrown out of office." ~~ P.Z. Myres


"The default position of human nature is to punch the other guy in the face and take his stuff." ~~ Dude

Brother Boot Knife of Warm Humanitarianism,

and Crypto-Communist!


dv82matt
SFN Regular

760 Posts

Posted - 10/11/2007 :  04:08:51
quote:
Originally posted by Ricky

quote:
3. Most people engaged in the field are optimistic that human-level intelligence can be engineered.

I'm optimistic that the goal I am working to reach is reachable, too.

quote:
2. The track record of AI research is poor.

This one seems rather unfounded. What do you mean by "poor"? When computers first became popular, there was an idea that we would have a computer you could not distinguish from a human. If that's your definition of poor, then jet pack development has also been poor. The field of AI, however, has grown by leaps and bounds. Perhaps it hasn't lived up to our dreams, but what has?

I agree that these are not excellent arguments; they are fairly common ones though, and they tend to cancel each other out anyway. And yes, by "poor" I mean it as you have suggested with the jet pack analogy.

quote:
4. Human intelligence may be very near the peak of what is possible. If so, then although human-level AI may be feasible, such an intelligence would not be able to bootstrap itself to yet higher levels of intelligence.

quote:
History argues otherwise.

How so?

dv82matt
SFN Regular

760 Posts

Posted - 10/11/2007 :  04:32:41
quote:
Originally posted by Dr. Mabuse

I recall a thread from a number of years ago where we discussed how long, given Moore's law applied to computer performance, it would take before we had a supercomputer powerful enough to run a simulation of as many neurons as the human brain contains; the answer came out to around 2024. I suppose that calculation didn't take into account any overhead for the connectivity between the neurons, but that would be minor compared to the total computing power needed to simulate the neurons themselves.

The simulation of a neuron would be a simulation of the processes within the neuron, and interconnecting many such simulated entities would create an AI net.
This would "solve" most of the problems behind AI's current poor track record.
Building an AI not based on a complete simulation of the human neuron would lower the computing power requirement, but how would such an AI develop?

Well, they have simulated half a mouse brain, but it is not clear that they have created a mouse-like intelligence. So there may be more to it than just joining up as many simulated neurons as possible.

It would probably be easier to develop an intelligence that is not based on the neuron. Biological structures tend to be more complicated than they strictly need to be, and they are difficult to analyze.

Consider the oft-used analogy to powered flight. Before the Wright brothers came along, it was obvious that powered flight was possible, because you could point out that birds and insects were already doing it. But humans achieved flight in a way that was different from the ways birds and insects achieve it. And we still haven't achieved bumblebee-like powered flight.

dv82matt
SFN Regular

760 Posts

Posted - 10/11/2007 :  04:46:18
quote:
Originally posted by filthy

Install a reliable, remote on/off switch. Then have at it.

Well, that would only work reliably if the AI were confined to a single isolated machine, and even then only if that machine were within our control (as opposed to being in a rogue state or hidden in some genius's basement).

astropin
SFN Regular

USA
970 Posts

Posted - 10/11/2007 :  10:51:23
I believe AI will reach the level of human intelligence within my lifetime (I'm 40). I also think it will then skyrocket right past us. I think that calling an AI that has left biological humans in the dust a "singularity" is a bit strange. I also don't think we can stop it, not without destroying civilization as we know it in the process. It will either be a fantastic voyage or a gigantic nightmare; either way, it's coming.

I would rather face a cold reality than delude myself with comforting fantasies.

You are free to believe what you want to believe and I am free to ridicule you for it.

Atheism:
The result of an unbiased and rational search for the truth.

Infinitus est numerus stultorum

dv82matt
SFN Regular

760 Posts

Posted - 10/11/2007 :  20:29:21
quote:
Originally posted by astropin

I believe AI will reach the level of human intelligence within my lifetime (I'm 40). I also think it will then skyrocket right past us. I think that calling an AI that has left biological humans in the dust a "singularity" is a bit strange. I also don't think we can stop it, not without destroying civilization as we know it in the process. It will either be a fantastic voyage or a gigantic nightmare; either way, it's coming.

If you are right, then it seems fair to say that the singularity will be one of the most significant events in the history of humanity. Do you think it is possible for us to improve the chances of a positive outcome, or are you pretty fatalistic about it?

chaloobi
SFN Regular

1620 Posts

Posted - 10/12/2007 :  09:49:20
quote:
Originally posted by dv82matt

As a follow-up, if the singularity seems plausible, what, if anything, should we do about it? Should we work to prevent it or encourage its arrival? Should we try to shape it so that the resulting AI is friendly to us, or would that be a futile endeavor? Or should we simply ignore it, or take a wait-and-see approach?

We should do it. Why not? Human extinction? Bah, humans will go extinct eventually anyway; the vast majority of all species that have ever existed are extinct. Humanity will be no exception. These days the relevant question may be how many other species we will end up taking with us. As far as shaping the AI is concerned, I think it would be stupidly negligent not to attempt to make it benign to humanity.

-Chaloobi


astropin
SFN Regular

USA
970 Posts

Posted - 10/12/2007 :  12:00:27
quote:
Originally posted by dv82matt

If you are right, then it seems fair to say that the singularity will be one of the most significant events in the history of humanity. Do you think it is possible for us to improve the chances of a positive outcome, or are you pretty fatalistic about it?

I wouldn't say I'm overly fatalistic about it, but if we do destroy ourselves, I think it will occur before this so-called "singularity" event takes place. Nanotechnology will take off before then, and I think the early years of "nanobot" development will be the most precarious time for us. This is a subject I go back and forth on to some degree; there are still too many unknowns. I just happen to agree with Kurzweil's theory of technological advancement. My only hang-up is that absolute processing power does not equal intelligence; it's the "software" end of things that could hold things up. Still, sooner or later a breakthrough will happen, and the later it happens, the faster things will change. That might sound backwards at first, but think about it: if we have computers capable of far more processing power than any human mind, just waiting to be unleashed with the proper software to become sentient, imagine how fast things will change when they do! Either way, I think it's going to happen far faster than most people anticipate, if they have anticipated it at all. Will it destroy us or turn us into virtual immortals? Only time will tell. Being a bit of an optimist, I give us about a 50/50 shot at making it through.

I would rather face a cold reality than delude myself with comforting fantasies.

You are free to believe what you want to believe and I am free to ridicule you for it.

Atheism:
The result of an unbiased and rational search for the truth.

Infinitus est numerus stultorum

Valiant Dancer
Forum Goalie

USA
4826 Posts

Posted - 10/12/2007 :  17:25:37
quote:
Originally posted by filthy

quote:
As a follow-up, if the singularity seems plausible, what, if anything, should we do about it? Should we work to prevent it or encourage its arrival? Should we try to shape it so that the resulting AI is friendly to us, or would that be a futile endeavor? Or should we simply ignore it, or take a wait-and-see approach?

Install a reliable, remote on/off switch. Then have at it.

Ah, learned from the M5 debacle, have we.

Cthulhu/Asmodeus when you're tired of voting for the lesser of two evils

Brother Cutlass of Reasoned Discussion

Ricky
SFN Die Hard

USA
4907 Posts

Posted - 10/12/2007 :  23:09:18
quote:
Originally posted by dv82matt

quote:
History argues otherwise.

How so?

The level of intelligence has been increasing ever since the end of the Dark Ages. It has not stopped yet, and if we extrapolate from the past, it will continue to increase in the future. To suggest that we are at the height of human intelligence now strikes me as a very ill-conceived notion.

Why continue? Because we must. Because we have the call. Because it is nobler to fight for rationality without winning than to give up in the face of continued defeats. Because whatever true progress humanity makes is through the rationality of the occasional individual and because any one individual we may win for the cause may do more for humanity than a hundred thousand who hug their superstitions to their breast.
- Isaac Asimov

H. Humbert
SFN Die Hard

USA
4574 Posts

Posted - 10/12/2007 :  23:32:44
quote:
Originally posted by Ricky

The level of intelligence has been increasing ever since the end of the Dark Ages. It has not stopped yet, and if we extrapolate from the past, it will continue to increase in the future. To suggest that we are at the height of human intelligence now strikes me as a very ill-conceived notion.

Our knowledge has been increasing for centuries. However, there is little evidence that the raw intelligence (the pure processing power) of Homo sapiens has increased since our most distant direct ancestors.

On the other hand, if we look at all life, then intelligence has increased from species to species over time, relatively speaking. It seems unlikely that human beings represent the pinnacle of that particular selected trait.


"A man is his own easiest dupe, for what he wishes to be true he generally believes to be true." --Demosthenes

"The first principle is that you must not fool yourself - and you are the easiest person to fool." --Richard P. Feynman

"Face facts with dignity." --found inside a fortune cookie
Edited by - H. Humbert on 10/12/2007 23:35:17

Ricky
SFN Die Hard

USA
4907 Posts

Posted - 10/13/2007 :  01:42:46
That is a point I considered when writing my previous post. However, I am unsure how one can differentiate between knowledge and intelligence. Not only that, but I have absolutely no idea how one could measure intelligence. Any suggestions?

Why continue? Because we must. Because we have the call. Because it is nobler to fight for rationality without winning than to give up in the face of continued defeats. Because whatever true progress humanity makes is through the rationality of the occasional individual and because any one individual we may win for the cause may do more for humanity than a hundred thousand who hug their superstitions to their breast.
- Isaac Asimov

dv82matt
SFN Regular

760 Posts

Posted - 10/13/2007 :  07:14:53
quote:
Originally posted by chaloobi

We should do it. Why not? Human extinction? Bah, humans will go extinct eventually anyway; the vast majority of all species that have ever existed are extinct. Humanity will be no exception. These days the relevant question may be how many other species we will end up taking with us.

Kind of a "make humanity's existence count for something" argument. I agree, though I'm a bit less fatalistic about humanity's survival chances.

quote:
As far as shaping the AI is concerned, I think it would be stupidly negligent not to attempt to make it benign to humanity.

That naturally brings up the question of how we can best do that.

Does what we decide to use an AI for have an effect on its friendliness? This is something I have been speculating about. For example, assuming an AI capable of wresting control of society away from us if it so desires, is an AI designed to control tanks, fighter jets, and submarines to wage a war inherently more dangerous to us than an equally capable AI designed to control dump trucks, passenger jets, and cruise ships?

quote:
Originally posted by astropin

I wouldn't say I'm overly fatalistic about it, but if we do destroy ourselves, I think it will occur before this so-called "singularity" event takes place. Nanotechnology will take off before then, and I think the early years of "nanobot" development will be the most precarious time for us.

Why do you think that nanotechnology will take off before strong AI is developed? Do you think deliberately slowing some types of nanotechnology research until a strong AI is developed would be likely to improve our survival chances?

quote:
This is a subject I go back and forth on to some degree; there are still too many unknowns. I just happen to agree with Kurzweil's theory of technological advancement. My only hang-up is that absolute processing power does not equal intelligence; it's the "software" end of things that could hold things up. Still, sooner or later a breakthrough will happen, and the later it happens, the faster things will change. That might sound backwards at first, but think about it: if we have computers capable of far more processing power than any human mind, just waiting to be unleashed with the proper software to become sentient, imagine how fast things will change when they do!

Makes sense to me.

quote:
Either way, I think it's going to happen far faster than most people anticipate, if they have anticipated it at all. Will it destroy us or turn us into virtual immortals? Only time will tell. Being a bit of an optimist, I give us about a 50/50 shot at making it through.

Yeah, I agree. I also think this is something we should try to anticipate to the degree that we can, as it could spell the difference between survival and extinction.