Stick Page Forums Archive

Emotion and Artificial Intelligence

Started by: Ash | Replies: 81 | Views: 4,439

LunarDeath

Posts: 17
Joined: Feb 2010
Rep: 10
Feb 4, 2010 8:09 PM #543520
Sorry, I wasn't finished with my sentence when I was cut off abruptly by the browser. Of course, artificially, that is... but I'm talking about the human way of emotional responses, recognition, and such.


Well, in my opinion, artificial intelligence with "emotion" has already been developed. However, emotion may not be the correct word; the robots are programmed to react to certain stimuli. I'm not against the idea that robots could show emotion at all, it's just that the human brain is more complicated than a computer: it consists of different "wiring" that differs for every human. So far, the artificial intelligences that have been created can display only around seven emotions (such as KOBIAN), so let's stop on that particular part, shall we?
The KOBIAN AI, developed as an emotionally extended artificial intelligence, is based on Maslow's hierarchy of needs: curiosity, expressing and responding to emotional praise or scolding, behaving according to the AI's needs (including internal factors), and fear, all combined to produce the current KOBIAN with its seven basic expressions. This is where I'll explain that each and every aspect of that hierarchy of needs is programmed. Take curiosity: the head researcher can program the AI to be drawn to objects or subjects that catch its eye as foreign. This is, of course, not a strange thing to the scientific community. Adaptive curiosity, as they call it, drives robots to learn from their environment, in visual, auditory, and tactile terms (maybe olfactory as well). Of course, this adaptive curiosity is made so that robots learn their environment, and let me stress a single word: MADE. It is a program, a motivation system, which reinforces the AI with a somewhat intrinsic motivation to know "what the hell is that thing". But here's the point: the AI's motivation is to define and describe that object, whereas we humans are curious about things not only because we want to know what they are, but also to experience them, and take a liking or a disgust to them! An AI doesn't come back after identifying a frog, pick it up, and put it in your sister's bedroom to see how she screams.
Then we come to recognizing and responding to expressions. It's a program where the AI's "brain" is fed thousands of human facial expressions (such as the corners of our mouths turning up when we're happy) and then programmed again: if you see this, you should respond like this or that. It may not know such a thing as a fake smile, and every human being expresses themselves differently in some situations. Take exam period: if one of our friends gets high grades and we don't, we either, one, congratulate the guy and feel happy for him; or two, feel betrayed, frown at him, and maybe drag him to the back of the school; or three, do nothing. Whereas an AI's expression is based only on recognizing generalized human expressions and the tone of our voice; only through reasoning, through the program the researchers have put inside it, does it know how to smile or look sad. Take David Hanson's AI: it recognizes a facial expression, THEN it reacts to the person in front of it. What if there's nobody around? Will the AI express emotion when there's nothing to base its expression on?
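The recognize-then-react pipeline described above boils down to a programmed stimulus-response mapping. As a minimal sketch (the expression labels and reaction names here are invented for illustration; real systems such as KOBIAN use far richer models, but the principle is the same):

```python
# Hypothetical sketch of a recognize-then-react "emotion" program.
# The expression labels and responses below are made up; the point
# is that the reaction is looked up, not felt.

RESPONSES = {
    "smile":   "smile_back",
    "frown":   "look_concerned",
    "neutral": "stay_neutral",
}

def react(detected_expression):
    """Return the programmed reaction for a recognized expression.

    With no recognized face (None), the robot has nothing to base a
    reaction on -- exactly the gap raised above.
    """
    if detected_expression is None:
        return "idle"  # no stimulus, no "emotion"
    return RESPONSES.get(detected_expression, "stay_neutral")
```

With nobody around, `react(None)` just falls back to an idle state; the mapping has nothing to say about expressions it was never given.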
Behaving according to its needs is also part of emotion, but as far as I can see, AIs can only respond to the need for security, through fear, and the need to fulfill internal needs, such as charging their batteries. Again, these are programs. Fear could be taught by showing the AI dangerous factors detectable by the five senses, by adapting the "fear method", as in A Clockwork Orange: tortured while watching violence (wow). It could be applied, though of course with methods suited to the mechanical aspects of the AI itself. As for fulfilling an AI's needs, well, maybe its obligations, such as the KOBIAN units that are going to be programmed to work in nursing care. Objectives are laid out, a list of what to do is input in code, and ta-dah, you've got yourself a nursing bot. But they'll work only on what they're programmed for; they won't fix the broken faucet if you want them to but they're not programmed to do so. They won't help lift you back into bed after you fall if they're not programmed to (maybe they'll stare at your face and express more of their emotion). And an AI's own needs are countable: charging the battery or replenishing its energy supply, check, and what else? Nothing. They don't want to pee, they don't want to become famous, they don't want to learn to skateboard and experience severe pain when they fall; AIs don't want to, because they have no wants. These wants are often the natural drivers of our emotional behaviour. With hunger comes the "starving" face; after fulfilling it we become either happy or bloated, weigh ourselves, and swear we'll eat only biscuits for thirty days while crying in the corner of the room.
Basically, what I'm trying to say is that AIs would have to feel and express emotion by themselves, not just through reaction and interaction with others. They should feel lonely when left alone, or happy when the work is done. The human mind is very complex, and each one differs from the others. We react differently, don't we? I might pet an animal and think it's cute, while others may feel disgusted and kick it far away. Humans are also attracted to different types of the opposite sex; they have desires, vanity, pride, prejudice, bias, the want to be acknowledged, and these are the driving forces of how we express our emotions. An AI couldn't show emotion as fully as we do unless it could fall in love, appreciate Shakespeare while dismissing William Blake's poetry, watch a Tom and Jerry cartoon and find it funny (or not), or see a dead kitten on the street and shed tears, or even laugh at it. AIs don't; they're constricted by their programs. Their minds are not tabula rasa: from the moment they are made, they're doomed to be mechanisms. As the law of mechanism states, everything is predictable, down to the last slight joint movement; everything is programmed and won't go out of place (unless, of course, we're talking about a berserk situation, which occurs mostly in movies). And that's not the human way of expressing emotion (as emotion can be affected by reasoning, language, and so on): we're allowed to choose and AIs aren't; they're made that way and will forever be that way (well, until they can finally develop their own consciousness). Until then, they'll be artificial emotionally extended intelligences.
As for experiencing pain: maybe not now, but by modifying the contact sensors already developed and applied to a few helper robots, and by giving the AEEI a synthetic nervous system programmed to send pain messages to the AI's "brain", with a few program tweaks modeled on our own pain-delivery system, robots may be able to experience pain. I just don't think it would be convenient. That is all.
Zed

Posts: 11,572
Joined: Feb 2009
Rep: 10
Feb 4, 2010 8:30 PM #543533
That's a wall of text to be proud of. I'm still working my way through it, but from scanning it quickly I think I broadly agree with you. Welcome to the debate section.
LunarDeath

Posts: 17
Joined: Feb 2010
Rep: 10
Feb 4, 2010 8:37 PM #543537
Thank you for the warm welcome and the compliment.
I've actually rewritten it twice, because apparently when I submitted my first mountain of text, it disappeared completely.
Exile
Administrator

Posts: 8,404
Joined: Dec 2005
Rep: 10
Feb 4, 2010 11:25 PM #543612
Quote from Ash
"Will ever be able to create"? Why so certain?


That's the one thing you picked out of that post to analyze? Come on. It might happen, but as far as I'm concerned, it probably won't. There's enough in that post to explain why I think that way.

Quote from Zed
For those who don't read big paragraphs:
-anything physical can be reproduced


Says who? And is this possibility theoretical or practical? Because yeah, theoretically we may be able to reproduce anything, but when it comes to actually doing it, those odds tend to change.

Great post, LunarDeath. It sounds like the core of your argument is that robots can be programmed to seamlessly replicate emotions but lack the ability to develop new ones beyond their programming -- that's more or less why I don't think we'll ever be able to develop technology like that. It would have to adapt appropriately to stimuli even beyond its parameters, which is something we've never managed to build.
Zed

Posts: 11,572
Joined: Feb 2009
Rep: 10
Feb 4, 2010 11:29 PM #543618
I confess, it's only theoretically possible, but it seems irrational to say that we will never be able to do something if it can be done in theory.
Exile
Administrator

Posts: 8,404
Joined: Dec 2005
Rep: 10
Feb 5, 2010 12:45 AM #543655
Quote from Zed
I confess, it's only theoretically possible, but it seems irrational to say that we will never be able to do something if it can be done in theory.


It's not. Replicating human emotions would require virtual perfection, because even tiny errors in a human brain create less-than-tiny defects. We don't even understand the human brain cohesively, so for now it's completely rational to say that we'll never achieve absolute perfection both in understanding the human brain and in building a synthetic one.

Just for an example, look at Rubik's cubes... it takes somewhere around 20-22 moves to solve any configuration of the cube, and the world's fastest speedcubers can probably do that many turns in 2-4 seconds. That would be the theoretical minimum solution time, but the best we've managed is somewhere around 7-8 seconds. Do you honestly think we'll ever reach that threshold? The answer's no, simply because of human error. Humans aren't perfect, and that translates into never achieving the theoretical ideal in anything but the simplest of tasks, and synthetically replicating a human brain is not one of those.

The only thing we can ever hope to do is come close, but as I said, human brains with defects create catastrophic differences from a normal brain, so how close would we really be getting?
Ash

Posts: 5,269
Joined: Nov 2005
Rep: 10
Feb 5, 2010 1:06 AM #543657
Quote from Exilement
It's not. Replicating human emotions would require virtual perfection, because even tiny errors in a human brain create less-than-tiny defects. We don't even understand the human brain cohesively, so for now it's completely rational to say that we'll never achieve absolute perfection both in understanding the human brain and in building a synthetic one.

Just for an example, look at Rubik's cubes.. it takes somewhere around 20-22 moves to solve any configuration of the cube, and the world's fastest speedcubers could probably do that many turns in 2-4 seconds. That would be considered the theoretical minimum solution time, but the best we've come to is somewhere around 7-8 seconds. Do you honestly think we'll ever reach that threshold? The answer's no, simply because of human error. Humans aren't perfect, and that translates into not achieving theoretical perfect situations in anything but the most simple of tasks, and synthetically replicating a human brain is not one of those things.

The only thing we can ever hope to do is come close, but as I said, human brains with defects create catastrophic differences from a normal brain, so how close would we really be getting?


Arguing that humans are imperfect sounds to me like a cop-out, because modeling the behavior of the human brain doesn't require so high a degree of accuracy that we should call it perfection. After all, we aren't talking about replicating the exact atomic structure of one specific brain, that's just excessive. We're talking about recreating the behavior of a brain to the point where it is indistinguishable from a human. I don't think the fact that the humans doing the poking and prodding are imperfect means that it's automatically destined to fail.
Exile
Administrator
2

Posts: 8,404
Joined: Dec 2005
Rep: 10

View Profile
Feb 5, 2010 4:21 AM #543718
Quote from Ash
Arguing that humans are imperfect sounds to me like a cop-out, because modeling the behavior of the human brain doesn't require so high a degree of accuracy that we should call it perfection.


Err, I'm pretty sure for a fake to be indistinguishable from the real thing, it has to be about as close to perfect as possible.

After all, we aren't talking about replicating the exact atomic structure of one specific brain, that's just excessive. We're talking about recreating the behavior of a brain to the point where it is indistinguishable from a human.


Are these two things really that different? For something to recreate those behaviors it needs to receive, interpret and process any kind of stimuli, new or familiar, in the same way a human brain does. As far as I'm concerned nothing is going to be able to mimic that in any indistinguishable way without trying to replicate the atomic structure of a brain.

I don't think the fact that the humans doing the poking and prodding are imperfect means that it's automatically destined to fail.


I'm already convinced that it's never going to happen; nothing you've said has made me think otherwise, and until it does, human error is more than enough of a reason to think it never will. It's not a cop-out at all, especially since the trends agree with me. I can't think of any absolutely flawless things humans have created, and if there are any, they're things nowhere near as complex as a human brain.

For the record I do believe we'll come very close to mimicking the human brain artificially. But no matter how good a mimic is, it's still distinguishable from the real thing if you throw enough stimuli at it. What you're talking about is perfect replication of human behavior, and since we don't even understand all the processes that go into it on a natural level, I refuse to believe that we'll ever reach a point where we'll be able to replicate them perfectly on a synthetic level. Though as far as I'm concerned "indistinguishable" is a bar that you've set way too high.
Ash

Posts: 5,269
Joined: Nov 2005
Rep: 10
Feb 5, 2010 5:02 AM #543726
Quote from Exilement
Err, I'm pretty sure for a fake to be indistinguishable from the real thing, it has to be about as close to perfect as possible.



Are these two things really that different? For something to recreate those behaviors it needs to receive, interpret and process any kind of stimuli, new or familiar, in the same way a human brain does. As far as I'm concerned nothing is going to be able to mimic that in any indistinguishable way without trying to replicate the atomic structure of a brain.



I'm already convinced that it's never going to happen; nothing you've said has made me think otherwise, and until it does, human error is more than enough of a reason to think it never will. It's not a cop-out at all, especially since the trends agree with me. I can't think of any absolutely flawless things humans have created, and if there are any, they're things nowhere near as complex as a human brain.

For the record I do believe we'll come very close to mimicking the human brain artificially. But no matter how good a mimic is, it's still distinguishable from the real thing if you throw enough stimuli at it. What you're talking about is perfect replication of human behavior, and since we don't even understand all the processes that go into it on a natural level, I refuse to believe that we'll ever reach a point where we'll be able to replicate them perfectly on a synthetic level. Though as far as I'm concerned "indistinguishable" is a bar that you've set way too high.


Exilement, you've taken the usage of the word "indistinguishable" and conflated it to be so much more than we meant. You KNOW that no one was talking about 100% perfection ANYWHERE in this thread. You KNOW that. It's like if I were to ask someone to create a perfect replica of the White House and you were to point out that a particular amoeba is missing from a specific cockroach.


Indistinguishable means indistinguishable to someone not spending 40,000 man-hours and a couple hundred thousand dollars on tests, indistinguishable to any given individual.

"See, this person gave output on my emotion scale that was well below what any human would have given if told his mother was out of town while answering trivia questions about Victor Hugo novels, he's obviously an AI!"


And I can't believe I have to point this out, but "close to perfection" and "perfection" aren't the same thing. Anything less than perfect is going to have limits.
Zed

Posts: 11,572
Joined: Feb 2009
Rep: 10
Feb 5, 2010 6:17 PM #543907
We don't have to physically recreate a human brain down to the last atom. All we need to do is programme a computer with the laws of physics and then use an advanced brain scanner to tell that computer where all the atoms should go to start with. I see no reason why someone's brain could not be x-rayed (or gamma-rayed, or whatever turns out to be necessary for brains; we may not have discovered it yet) in enough detail to place a coordinate on each atom at a given time.
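Zed's proposal has the shape of a state-plus-update-rule simulation: a scan supplies the initial coordinates, and the computer just steps them forward under physical laws. As a toy illustration only (the "laws" here are constant velocities, a stand-in for real physics, which would be unimaginably more complex):

```python
# Toy sketch of the scan-then-simulate idea. A scanner supplies
# initial particle coordinates; the computer advances them by an
# update rule. Real brain physics would be vastly harder -- this
# only illustrates the shape of the proposal.

def step(positions, velocities, dt):
    """Advance every particle one time step (simple Euler integration)."""
    return [(x + vx * dt, y + vy * dt)
            for (x, y), (vx, vy) in zip(positions, velocities)]

# "Scanned" initial state: two particles with fixed velocities.
positions = [(0.0, 0.0), (1.0, 1.0)]
velocities = [(1.0, 0.0), (0.0, -1.0)]

for _ in range(10):
    positions = step(positions, velocities, dt=0.1)
# After ten steps, particle 0 has drifted right and particle 1 down.
```

The whole question, of course, is whether the update rule and the scan can ever be accurate enough; the structure of the computation itself is simple.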

The fact of human error doesn't mean this can't be done eventually. Let's take your Rubik's cube example. Get five hundred really dexterous people in a room and give each of them a different preset sequence of twists. Then keep making random cubes and throwing them in. Eventually someone's going to get it done right. In the same way that you say enough stimuli will show a slightly imperfect machine for what it is, enough energy devoted to this will create a perfect machine, even if it takes some trial and error.
Exile
Administrator
2

Posts: 8,404
Joined: Dec 2005
Rep: 10

View Profile
Feb 6, 2010 5:21 AM #544013
Quote from Ash
Exilement, you've taken the usage of the word "indistinguishable" and conflated it to be so much more than we meant. You KNOW that no one was talking about 100% perfection ANYWHERE in this thread. You KNOW that. It's like if I were to ask someone to create a perfect replica of the White House and you were to point out that a particular amoeba is missing from a specific cockroach.


I used it in the context of its own definition, don't tell me that I KNEW that you meant it as something else. When you say "AI indistinguishable from human intelligence" I take that to mean "incapable of being perceived as different", which means it's an exact, perfect replica. Excuse me if that was some sort of profound assumption.

Indistinguishable means indistinguishable to someone not spending 40,000 man-hours and a couple hundred thousand dollars on tests, indistinguishable to any given individual.


Okay, so what part of my argument made the task so much harder than this? Is there something about recreating all of the brain's emotional processes that wouldn't go into creating AI that's indistinguishable to any given individual? I'm legitimately confused about what you're getting at here.

If you just think that we can program all the possible scenarios and hope a robot can mimic the human brain's functions without actually replicating them, LunarDeath covered why it wouldn't work very nicely.

Quote from Zed
The fact of human error doesn't mean this can't be done eventually. Let's take your Rubik's cube example.


Okay, maybe the cube was a bad example, mostly because those 22 turns are determined mathematically, which doesn't relate to how people actually solve cubes, regardless of the method. So, no, your scenario wouldn't work, but I guess that doesn't prove or disprove anything.

Also, ducking behind hypothetical technological advances that are beyond anything we can even perceive now (atomic/molecular technology being autonomously rearranged according to medical scans?) really doesn't help to prove anything. Even I can acknowledge that yeah, technologically we see breakthroughs every day, but it really doesn't help your argument to say it might someday apply to this.
Zed

Posts: 11,572
Joined: Feb 2009
Rep: 10
Feb 6, 2010 9:10 AM #544082
I wanted to rearrange my argument last night, but the forums were down, so I couldn't. I intend to take a different line of argument, though:

In terms of human capabilities, there can be limits. There is no reason to think we could ever run 100m in 3 seconds because that would require someone to be born who was absolutely spectacular. The problem in this case is that you start right from scratch with each birth until you hit the jackpot.

Technology isn't like that, however. Once you have a technology, it's there and you can add to it. You don't have to come up with the concept of the wheel yourself to build a bicycle, for instance. Technology is always improving. The only real limit on how far technology can improve is what is physically possible.
Ash

Posts: 5,269
Joined: Nov 2005
Rep: 10
Feb 7, 2010 12:15 AM #544266
Quote from Exilement
I used it in the context of its own definition, don't tell me that I KNEW that you meant it as something else. When you say "AI indistinguishable from human intelligence" I take that to mean "incapable of being perceived as different", which means it's an exact, perfect replica. Excuse me if that was some sort of profound assumption.


A program that replicates human behavior is a program, not a human brain, so there will automatically be a limit to how far it can be replicated, but that limit is irrelevant to this discussion, because no one would ever be interested in exceeding it. Is that not plainly obvious? Did you think we were talking about a brain that would be indistinguishable under an electron microscope? You kind of ignored my White House analogy. If a person claimed he had made a perfect replica of the White House, indistinguishable from the real thing, would you point out that a particular atom from a wooden cross-beam was missing?


Okay, so what part of my argument made the task so much harder than this? Is there something about recreating all of the processes of a brain that deal with emotion that wouldn't go into creating AI that's indistinguishable to any given individual? I'm legitimately confused at where you're getting at here.

I'm sorry, I don't understand what you're asking here. What I was getting at in that post was that you're assuming replication means ATOMIC replication. You don't seem to understand the difference between replicating the physical PROCESSES of the brain and the physical STRUCTURE of the brain. I don't deny that missing the influence of three atoms on a human's behavior might be enough to make the AI no longer "indistinguishable" by your overly literal definition, but it would NOT matter to the "target audience".

If you just think that we can program all the possible scenarios and hope a robot can mimic the human brain's functions without actually replicating them, LunarDeath covered why it wouldn't work very nicely.

That's not how programming works. When you make a program, you don't build a table of every possible circumstance and its appropriate reaction. You create algorithms that put data through logic. That makes creating an AI less a matter of understanding the SITUATIONS it will encounter and more a matter of understanding what makes a brain tick.

A good comparison is a digital watch. If you wanted to replicate its behavior, you COULD spend years building a table of every sequence of button presses, its relation to the current screen display, and the appropriate action to take.
For example, on the original watch, the user pushes the "mode" button twice, and the screen changes first to the time-and-date-setting screen and then to the stopwatch screen. Then the user pushes the "start" button, and the stopwatch runs until finally the user pushes "stop".

A table approach to that last step, stopping the watch, would require a table entry for every single millisecond on into infinity, or else some user would leave the timer running longer than you'd programmed the watch to handle and its computer would crash completely. If a user tries a button or timing combination you didn't anticipate, the watch can't handle it.

Alternatively, you can take the algorithmic approach, and suddenly everything is much easier. Instead of a couple billion lines of code, you need only a few hundred, maybe fewer. Your program no longer looks up the current time to find out what to do; now one part of it changes a variable and another part responds to that change.
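The algorithmic approach can be made concrete as a tiny state machine. The states, button names, and behavior below are simplified assumptions about a generic digital watch, not real firmware, but they show how a few variables and transition rules cover unbounded input sequences that a lookup table never could:

```python
# Minimal state-machine sketch of the digital watch example.
# No table of button/time combinations: just state variables
# plus rules for how each button changes them.

class Watch:
    MODES = ["time", "set", "stopwatch"]

    def __init__(self):
        self.mode = "time"      # which screen is displayed
        self.running = False    # is the stopwatch counting?
        self.elapsed = 0        # stopwatch milliseconds

    def press(self, button):
        if button == "mode":
            # cycle through the display modes
            i = self.MODES.index(self.mode)
            self.mode = self.MODES[(i + 1) % len(self.MODES)]
        elif button == "start" and self.mode == "stopwatch":
            self.running = True
        elif button == "stop" and self.mode == "stopwatch":
            self.running = False

    def tick(self, ms=1):
        # called every millisecond; no per-millisecond table entries
        if self.running:
            self.elapsed += ms
```

Pressing "mode" twice reaches the stopwatch screen, matching the button sequence described above, and the timer can run for any length of time without a single additional line of code.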

Replicating human behavior includes replicating things like memories, idle musings, and even primal desires and instincts. A proper AI would have to be programmed with a full life, or better yet, put into a simulation of a full life.

Basically, it would have to want to **** the shit out of that blonde, or perhaps be gay, or maybe have the personality of a serial rapist. The same elements that go into human behavior would have to be included.

Also, ducking behind hypothetical technological advances that are beyond anything we can even perceive now (atomic/molecular technology being autonomously rearranged according to medical scans?) really doesn't help to prove anything. Even I can acknowledge that yeah, technologically we see breakthroughs every day, but it really doesn't help your argument to say it might someday apply to this.


Like I said earlier in this post, atomic replication is not important, behavioral replication is.
Exile
Administrator
2

Posts: 8,404
Joined: Dec 2005
Rep: 10

View Profile
Feb 7, 2010 6:26 AM #544350
Quote from Ash
You kind of ignored my white house analogy.


Mostly because I don't recall ever trying to say that the only way to recreate a human brain artificially would be by creating an anatomically correct physical substitute along with all the other working processes. I'm focusing on synthetically recreating the processes of how thoughts and emotions are carried out in the brain, not on creating little machines that are physically identical to a neuron to make it look like a brain when all is said and done. Which is reinforced by this:

You don't seem to understand the difference between replication of the physical PROCESSES of the brain and the physical STRUCTURE of the brain.


If I ever mentioned "structure" (which I don't recall doing but I'm not about to comb through my previous posts) I was talking about the structure of the processes the human mind goes through in order to create thoughts and emotion. It's not something that I think can be duplicated with a different type of programming in any convincing way.

That's not how programming works. When making a program, you don't make a table of every possible circumstance and the appropriate reaction. You create algorithms that put data through logic.


Again, you misunderstood what I said, but I can see why, I worded that really badly.

However, no, that's not how programming works either. You don't create algorithms that put data through logic, you create algorithms that feed variables into data. That data analyzes the variable based on factors, and as long as the variables meet certain factors, it spits out predetermined actions, those factors/actions being as complex as the coding that goes into it. I still recall the debate we had about what logic is and I'm amazed that you were willing to throw that word around like you just did. Nothing about programming is logical, it just compares incoming data to what it already knows through what was programmed into it.

Which is what I meant by "situations", forgive me the terminology, but you need to create enough core data that the incoming situations or variables this AI would experience produce a realistic, human-like emotional response. And quite frankly I don't think that's possible, given the infinite number of variables and the complexity with which humans react to them.

Thanks for the comparison to the digital watch, but I was in no way saying that if the AI were ever to react to a piano being dropped off a construction crane onto a Puerto Rican's fruit stand carrying papayas but no mangoes on the 13th of February at noon in San Francisco, the AI would have to have a predetermined response programmed into it, and that if it didn't, it wouldn't meet your criterion of "indistinguishable". After all the years I've been here, I would think you'd know by now that I'm smart enough not to suggest that; that's completely idiotic.

Basically, it would have to want to **** the shit out of that blonde, or perhaps be gay, or maybe have the personality of a serial rapist. The same elements that go into human behavior would have to be included.


Okay, so program ideal facial ratios and characteristics into an AI and it might be able to do those first two things; we already have facial-recognition software. Those are simple variables. The problem is the kinds of variables that are simple to humans but would be abstract to an AI. Humans have years of context to support their reactions to stimuli, humans have physical responses, humans are conscious of how other people will react and how their actions will affect the future. These aren't things I ever see an AI doing, and they're elementary things that the average person would probably see through.

I just don't see why you think human experiences are something that can be both represented in lines of code and plugged into an AI with the expectation that it'll accurately replicate human behavior, even on a superficial level.
Ash

Posts: 5,269
Joined: Nov 2005
Rep: 10
Feb 7, 2010 4:28 PM #544501
Quote from Exilement
Mostly because I don't recall ever trying to say that the only way to recreate a human brain artificially would be by creating an anatomically correct physical substitute along with all the other working processes. I'm focusing on synthetically recreating the processes of how thoughts and emotions are carried out in the brain, not on creating little machines that are physically identical to a neuron to make it look like a brain when all is said and done. Which is reinforced by this:



If I ever mentioned "structure" (which I don't recall doing but I'm not about to comb through my previous posts) I was talking about the structure of the processes the human mind goes through in order to create thoughts and emotion. It's not something that I think can be duplicated with a different type of programming in any convincing way.



Again, you misunderstood what I said, but I can see why, I worded that really badly.

However, no, that's not how programming works either. You don't create algorithms that put data through logic, you create algorithms that feed variables into data. That data analyzes the variable based on factors, and as long as the variables meet certain factors, it spits out predetermined actions, those factors/actions being as complex as the coding that goes into it. I still recall the debate we had about what logic is and I'm amazed that you were willing to throw that word around like you just did. Nothing about programming is logical, it just compares incoming data to what it already knows through what was programmed into it.

Erm... no, you've got your wording screwed up. A variable is a placeholder for data; it's not something that you put into data. The position of an object on the Y axis is a piece of data. Data is what a variable contains, so feeding a variable into data makes no sense.

x=mouse_x+16 is an example of an algorithm I grabbed from an actual program. x is a variable. mouse_x is another variable. What "mouse_x" contains is the current position on the x axis of the mouse cursor. That number is an example of data. x is the position of the object executing this code on the x axis.

You may already understand all of this, but your definitions are clearly incorrect. Data would be a point of information, while a variable is a container to move this information from place to place in a program or algorithm.
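That distinction can be sketched in a few lines of Python (the value 240 is a made-up stand-in for wherever the cursor happens to be): the variable is the named container, and the data is the value it holds at a given moment.

```python
# Variables are named containers; data is what they hold.
mouse_x = 240        # data: the cursor's current x position (here, a stand-in value)
x = mouse_x + 16     # the algorithm derives new data and stores it in the variable x
print(x)             # prints 256
```

Swap a different number into `mouse_x` and the same algorithm produces different data in `x`; the variables and the algorithm don't change, only the data flowing through them does.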

Which is what I meant by "situations", forgive me for the terminology, but you need to create enough core data so that incoming situations or variables that this AI would experience would create a realistic, human-like emotional response. And quite frankly I don't think that's possible, given the infinite number of variables and the complexity with which humans react to them.

Thanks for the comparison to the digital watch, but I was in no way saying that if the AI was to ever react to a piano being dropped off of a construction crane onto a Puerto Rican's fruit stand carrying papayas but no mangoes on the 13th of February at noon in San Francisco, the AI would have to have a pre-determined response programmed into it, and if it didn't it wouldn't meet your criteria of "indistinguishable". For all the years I've been here I would think you would know by now that I'm smart enough not to suggest that; that's completely idiotic.

That was meant to be a response to Lunar Death's earlier post, which you cited earlier. And i c wat u did dere



Quote from Exilement
Okay, so program ideal facial ratios and characteristics into an AI and it might be able to do those first two things; we already have facial-recognition software. Those are simple variables. The problem is the kinds of variables that are simple to humans but would be abstract to an AI. Humans have years of context to support their reactions to stimuli, humans have physical responses, and humans are conscious of how other people will react and how their actions will affect the future. These aren't things I ever see an AI doing, and they're elementary things that the average person would probably be able to see through.

I just don't see why you think human experiences are something that can be both represented in lines of code and plugged into an AI with the expectation that it'll accurately replicate human behavior, even on a superficial level.


Okay, so basically we've gone through a long string of misunderstandings and I think we've both lost the points of the other, so I'll step back a bit and start in a new direction to help this debate move more smoothly.

The human brain is adaptive. Certain stimuli can create whole new ways to process information. The complexity of an adult brain doesn't come about because of the complexity of the brain itself, but because the relatively simple action of learning has built all sorts of new pathways.

This AI would have to have the capability to do this. Obviously this is a complex thing to do, but the brain DOES EXACTLY THIS, and it does it in a NATURALISTIC MANNER. This means that if given enough time, we can eventually create code able to mimic the ability to learn, and from there we're home free. If you agree that replicating the learning behavior of the brain is possible, then by extension mimicking it to the point where it can, say, pass a Turing test will take time, but will be possible.

I think it's worth pointing out that I by no means think that the experiences of the AI would have to be coded in. They would have to be put in as data, or stimuli, just like how a human experiences things through stimuli.
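That "experiences enter as data, not as code" point can be sketched in Python. This is a toy, not a model of the brain, and all the names (the stimulus, the responses, the reward values) are hypothetical: the agent starts with no stimulus-response pairings coded in and builds its associations entirely from feedback, with rewarded responses getting reinforced.

```python
from collections import defaultdict

class TinyLearner:
    """Toy agent: no responses are pre-programmed per stimulus;
    associations are built up entirely from rewarded experience."""

    def __init__(self):
        # weights[stimulus][response] is strengthened by positive feedback
        self.weights = defaultdict(lambda: defaultdict(float))

    def act(self, stimulus, responses):
        known = self.weights[stimulus]
        # Explore: try any response this stimulus hasn't been paired with yet.
        for response in responses:
            if response not in known:
                return response
        # Exploit: otherwise pick the most strongly reinforced response.
        return max(known, key=known.get)

    def learn(self, stimulus, response, reward):
        # Simple reinforcement: strengthen (or weaken) the association.
        self.weights[stimulus][response] += reward

agent = TinyLearner()
for _ in range(20):
    choice = agent.act("loud noise", ["approach", "flinch"])
    agent.learn("loud noise", choice, 1.0 if choice == "flinch" else -1.0)

print(agent.act("loud noise", ["approach", "flinch"]))  # prints flinch
```

Nothing about "flinch at loud noises" was written into the program; the behavior emerges from the feedback loop, which is the (very small-scale) sense in which a learning rule plus incoming stimuli can stand in for hand-coded responses.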
Website Version: 1.0.4
© 2025 Max Games. All rights reserved.