BigPapaSmurf
SFN Die Hard
3192 Posts |
Posted - 09/07/2004 : 10:02:59
Amen, brother! Really, we need to settle the semantics battle before we get into this discussion.
I'll start, and you folks can modify the description from there:
The ability of a program to learn from mistakes in ways not pre-programmed into the machine.
Man, this is harder than I thought. Really, I think it will need basic goals or laws to live by, like RoboCop. Example: avoid things which may cause damage; if you sustain damage, avoid that which damaged you.
"...things I have neither seen nor experienced nor heard tell of from anybody else; things, what is more, that do not in fact exist and could not ever exist at all. So my readers must not believe a word I say." -Lucian on his book True History
"...They accept such things on faith alone, without any evidence. So if a fraudulent and cunning person who knows how to take advantage of a situation comes among them, he can make himself rich in a short time." -Lucian critical of early Christians c.166 AD From his book, De Morte Peregrini |
Valiant Dancer
Forum Goalie
USA
4826 Posts |
Posted - 09/07/2004 : 12:34:29
quote: Originally posted by chaloobi
The words "inevitable" and "impossible" are two that one would be wise to stay away from when discussing the future. I think the latter is the one more dangerous. I feel very confident in predicting neither is an accurate descriptor of the potential for AI.
When dealing with the extremely complex nature of human abstract thought, the limitations of knowledge in the area required to fabricate a program to handle every possibility, the group mechanics involved in producing such a program (the human condition is too complex for a single person to fabricate), and the number of unknowns with no visible means of getting to a finished product, "impossible" is IMHO a valid descriptor, especially since computer programmers deal in concrete concepts when handling tasks. The entirely non-concrete nature of abstract thought makes it impossible to program for. I also have a more cynical view of humans: computer programmers are temperamental. (I include myself in this.)
Cthulhu/Asmodeus when you're tired of voting for the lesser of two evils
Brother Cutlass of Reasoned Discussion |
Maverick
Skeptic Friend
Sweden
385 Posts |
Posted - 09/07/2004 : 12:47:27
quote: Originally posted by Ricky
What I mean is true AI: the computer being able to make choices on its own. The AI we have right now gets its behaviour from situational statements that a programmer put in:
If X happens, do Y. If A happens, do B.
For real AI, the computer would be able to decide how to react to situation X, or situation A, and so on. As of right now, the computer is being "forced" to do something.
But there are already methods for building self-learning systems. They're not on our level, of course, and they are highly specialised. Two things I find interesting are neural nets and genetic algorithms. Maybe with quantum computing we can build systems that are self-learning and can be self-aware, similar to how we are?
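As a minimal concrete example of the neural-net side of this, here is a single perceptron that starts with zero weights and, instead of being handed if-then rules, adjusts itself from labelled examples. The AND task, learning rate, and epoch count are illustrative choices of mine, not anything from the thread:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single threshold unit from labelled examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # The update rule is the "learning": no rule for AND was written.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

The programmer supplies the architecture and the update rule; the behaviour (here, computing AND) comes out of the training data, which is the distinction being drawn between hand-coded rules and learning.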
"Life is but a momentary glimpse of the wonder of this astonishing universe, and it is sad to see so many dreaming it away on spiritual fantasy." -- Carl Sagan |
Valiant Dancer
Forum Goalie
USA
4826 Posts |
Posted - 09/07/2004 : 12:57:45
quote: Originally posted by BigPapaSmurf
Amen, brother! Really, we need to settle the semantics battle before we get into this discussion.
I'll start, and you folks can modify the description from there:
The ability of a program to learn from mistakes in ways not pre-programmed into the machine.
Man, this is harder than I thought. Really, I think it will need basic goals or laws to live by, like RoboCop. Example: avoid things which may cause damage; if you sustain damage, avoid that which damaged you.
The laws being the basic Asimovian Laws of Robotics, I am assuming. That requires computers (AI) to make moral decisions (abstract thought).
Maverick
Skeptic Friend
Sweden
385 Posts |
Posted - 09/07/2004 : 13:25:20
quote: Originally posted by Ricky
quote: But there are already methods for building self-learning systems. They're not on our level, of course, and they are highly specialised. Two things I find interesting are neural nets and genetic algorithms. Maybe with quantum computing we can build systems that are self-learning and can be self-aware, similar to how we are?
I know of things where computers can gather data, such as a user name; however, the computer doesn't understand that data. It takes a programmer to tell the computer where to and where not to use it. This is something true AI can't have: a programmer. The machine must be able to learn by itself.
Well, a programmer has to build the actual system first, the framework, and then perhaps let it run by itself. But it still has to interact with other systems: databases of knowledge, perhaps, or humans and the environment through artificial senses.
Dave W.
Info Junkie
USA
26022 Posts |
Posted - 09/07/2004 : 18:10:06
Ricky wrote: quote: This is something which it would need to teach itself.
Strictly analogously to human intelligence, no. Humans don't need to learn how to create or retrieve memories. That part is hard-wired into the brain.
To make a truly impressive AI, though, it'll have to have the capacity to re-build its own data storage and retrieval routines, to a certain extent. I'm not talking about it being able to rewire its own hardware, just the algorithms that execute upon that hardware.
Valiant Dancer wrote: quote: When dealing with the extremely complex nature of human abstract thought, the limitations of knowledge in the area required to fabricate a program to handle every possibility, the group mechanics involved in producing such a program (the human condition is too complex for a single person to fabricate), and the number of unknowns with no visible means of getting to a finished product, "impossible" is IMHO a valid descriptor, especially since computer programmers deal in concrete concepts when handling tasks. The entirely non-concrete nature of abstract thought makes it impossible to program for. I also have a more cynical view of humans: computer programmers are temperamental. (I include myself in this.)
I, also being a programmer, have a very different view of how an AI will be accomplished. The "task" is certainly not to create a monolithic program capable of handling every possibility, but to create a small program, capable of modifying its own behaviour in response to its processing, and capable of communicating with other programs like itself. Toss a few hundred billion of these programs together (preferably each with its own processor), hook a few of them permanently to an I/O device (like the Internet) and a few more to some reconfigurable databases with a boatload of storage, and you'll have a rudimentary brain which can begin learning.
At least, that's what I understood some 20 years ago, before I stopped following that field. Even then, it wasn't a matter of being impossible; it was a matter of waiting for the technology to be able to support such an undertaking. Given enough "virtual neurons," I don't see a barrier to abstract thought by the system as a whole. Each v-neuron certainly wouldn't be "thinking," but the overall processing might be able to wing it.
And the key is probably to avoid attempting to endow the program with code specific to what we think of as "abstract thought," anyway. My kid didn't have it when he was born (no infants have it, so far as I know). I was delighted to see him finally realize that when I hid his toy, it didn't cease to exist. Is there a good reason to think that an artificial brain should not need to go through a similar developmental process? |
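A drastically scaled-down sketch of the "many tiny communicating programs" picture above: each unit below is trivial on its own and knows nothing about any task; whatever processing happens is a property of the network. All sizes, weights, and the wiring scheme are arbitrary toy choices, not a claim about how a real system would be built.

```python
import random

random.seed(1)

class VNeuron:
    """A 'virtual neuron': sums weighted inputs and thresholds. Nothing more."""

    def __init__(self):
        self.inputs = []   # list of (source unit, connection weight)
        self.state = 0.0

    def step(self):
        total = sum(src.state * w for src, w in self.inputs)
        self.state = 1.0 if total > 0.5 else 0.0

# Wire 200 units together at random (5 incoming connections each).
units = [VNeuron() for _ in range(200)]
for u in units:
    for _ in range(5):
        u.inputs.append((random.choice(units), random.uniform(-1, 1)))

# Stimulate a few "sensory" units, then let activity ripple through.
for u in units[:20]:
    u.state = 1.0
for _ in range(10):
    for u in units:
        u.step()

print(sum(u.state for u in units), "of 200 units active")
```

A real attempt would need vastly more units, learning rules that adjust the weights, and persistent I/O, but the division of labour is the point: no single unit "thinks".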
- Dave W. Evidently, I rock! Why not question something for a change? Visit Dave's Psoriasis Info, too. |
Skyhawk
New Member
33 Posts |
Posted - 09/07/2004 : 23:17:22
Yeah, Dave W. hit the point of my earlier post right on. The design of a brain cell is very simple; neurons and the like are simple enough for humans to research and understand. If you read articles on networking, many things in nature, as well as in the tech industry, use some sort of networking. The brain is a fine example of complex networks, where you have main "data branches" and "nodes." If you create a bunch of programs that act like cells to be networked, a brain is possible. We don't need quantum computers to accomplish this (unless we are talking about decreasing the power consumption of computers); instead, we need to research the behaviour of individual cells and parts of the brain. We also need to create a template (i.e., DNA) to form such a program: the basic parameters necessary for it to come into existence.
It's funny how similar the tech and natural worlds are becoming. Both require some form of networking, initial "programming" (for humans, this is instinct), and so on. The point people keep missing is that programming is just defining parameters, creating the necessary algorithms for those parameters, and implementing them to do a certain task. That is conventional programming. But ask any hacker or virus coder, and it's a fine art. You can split tasks and have programs "talk" to each other and share resources in order to accomplish work, and genetic programs can manipulate their own code to adapt to environments (that's how the legendary virus coders create their viruses). For further reading, I think there is a book on the net called "Black Book of Viruses", by some famous virus coder, which talks about creating programs that mimic actual viruses, and about the abstract thought needed to create them. I just hope these virus coders can be put to good use and share their genetic-programming knowledge.
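The genetic idea above can be sketched with a bit-string "DNA" evolving toward a goal. Everything concrete here (population size, mutation rate, the all-ones target) is an arbitrary toy choice; the point is that no individual solution is hand-coded, only the selection pressure:

```python
import random

random.seed(42)
TARGET = [1] * 20  # the goal; the "solutions" are never written by hand

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Each bit flips with small probability: copying errors in the "DNA".
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                  # selection
    # Keep the best genome unchanged (elitism); refill the rest with mutants.
    population = survivors[:1] + [mutate(random.choice(survivors))
                                  for _ in range(29)]

print("best fitness:", fitness(population[0]))
```

The "template" here is just the genome length and mutation rate; the fit genomes emerge from variation and selection rather than from a programmer writing them down.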
Maverick
Skeptic Friend
Sweden
385 Posts |
Posted - 09/07/2004 : 23:42:56
quote: Originally posted by Ricky
The problem is the programmer has to tell the computer how to handle all the data, how it would go about retrieving the data, how it would send out the data. This is something which it would need to teach itself.
Well, it is possible to write a program that can learn how to do different things without a programmer telling it how. That way, the computer learns to accomplish something on its own. However, it is true that the programmer must give it a goal to accomplish.
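A small illustration of "the programmer supplies the goal, the program finds the how" (the three actions and their payout odds are invented for this sketch): a trial-and-error learner that is told only to maximise reward, yet discovers for itself which action pays best.

```python
import random

random.seed(7)
payouts = {"A": 0.2, "B": 0.8, "C": 0.5}  # hidden from the learner
estimates = {a: 0.0 for a in payouts}     # the learner's beliefs
counts = {a: 0 for a in payouts}

for _ in range(1000):
    if random.random() < 0.1:     # explore: try something at random
        action = random.choice(list(payouts))
    else:                         # exploit: use what has worked so far
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < payouts[action] else 0
    counts[action] += 1
    # Incremental average: refine the estimate from experience alone.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("preferred action:", max(estimates, key=estimates.get))
```

The goal (maximise reward) is the programmer's; the policy is learned, which is exactly the split described above.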
BigPapaSmurf
SFN Die Hard
3192 Posts |
Posted - 09/08/2004 : 05:13:39
Just make the goal "learn new things"... Anyway, it seems to me some of you are hung up on the AI acting and developing like a human. If I were creating an AI, it would not be human-like; we have enough problems with humans.
Valiant Dancer
Forum Goalie
USA
4826 Posts |
Posted - 09/08/2004 : 06:40:00
quote: Originally posted by Dave W. I, also being a programmer, have a very different view of how an AI will be accomplished. The "task" is certainly not to create a monolithic program capable of handling every possibility, but to create a small program, capable of modifying its own behaviour in response to its processing, and capable of communicating with other programs like itself. Toss a few hundred billion of these programs together (preferably each with its own processor), hook a few of them permanently to an I/O device (like the Internet) and a few more to some reconfigurable databases with a boatload of storage, and you'll have a rudimentary brain which can begin learning.
Again, we will have to disagree. I have never seen a program that was designed to adapt to changing situations do anything beyond the set of situations it was designed for, especially when some of them involved concepts whose meaning the computer was never programmed with.
quote:
At least, that's what I understood some 20 years ago, before I stopped following that field. Even then, it wasn't a matter of being impossible, it was a matter of waiting for the technology to be able to support such an undertaking. Given enough "virtual neurons," I don't see a barrier to abstract thought by the system as a whole. Each v-neuron certainly wouldn't be "thinking," but the overall processing might be able to wing it.
As I understand neurons, it isn't the operation of any specific neuron in particular; it is the interaction of neurons together which makes abstract thought possible.
quote:
And the key is probably to avoid attempting to endow the program with code specific to what we think of as "abstract thought," anyway. My kid didn't have it when he was born (no infants have it, so far as I know). I was delighted to see him finally realize that when I hid his toy, it didn't cease to exist. Is there a good reason to think that an artificial brain should not need to go through a similar developmental process?
Your child, as well as mine, had the capability of abstract thought throughout. He learned that items have permanence through the inquisitive nature of humans. How does one program inquisitiveness into an artificial entity?
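One hedged, much-simplified answer to the inquisitiveness question is to reward novelty itself. In this toy (the grid size and step count are arbitrary choices of mine), the agent's only drive is to prefer states it has visited least, a crude stand-in for curiosity:

```python
import random

random.seed(3)
visits = {}        # how often each grid square has been seen
state = (0, 0)

def neighbours(pos):
    x, y = pos
    return [(x + dx, y + dy)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

for _ in range(200):
    visits[state] = visits.get(state, 0) + 1
    options = neighbours(state)
    least = min(visits.get(n, 0) for n in options)
    # "Curiosity": always move toward a least-visited neighbour.
    state = random.choice([n for n in options if visits.get(n, 0) == least])

print(len(visits), "of 25 squares explored")
```

Whether novelty-seeking of this kind captures what a child actually does is exactly the open question here, but it suggests inquisitiveness is at least programmable in a weak sense.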
Maverick
Skeptic Friend
Sweden
385 Posts |
Posted - 09/08/2004 : 06:45:37
quote: Originally posted by BigPapaSmurf
Just make the goal "learn new things"... Anyway, it seems to me some of you are hung up on the AI acting and developing like a human. If I were creating an AI, it would not be human-like; we have enough problems with humans.
The reason to make an AI human-like would be if it had to interact with humans, in which case it might help. Other than that, AI can encompass many different kinds of intelligence, depending on what is needed.
Dude
SFN Die Hard
USA
6891 Posts |
Posted - 09/08/2004 : 08:51:19
quote: Yeah, Dave W hit my point of my earlier post right on. The design of a brain cell is very simple. Neurons, etc. are simple enough for humans to research and understand.
There is some recent research (and I can't find a link to any article about it... grrr) suggesting that certain cells in the brain, previously thought to have no function other than as connective tissue, actually play an important role in memory and some other critical processes.
The brain is extremely complex, and we are just beginning to figure out the basics.
Ignorance is preferable to error; and he is less remote from the truth who believes nothing, than he who believes what is wrong. -- Thomas Jefferson
"god :: the last refuge of a man with no answers and no argument." - G. Carlin
Hope, n. The handmaiden of desperation; the opiate of despair; the illegible signpost on the road to perdition. ~~ da filth |