Stick Page Forums Archive

Emotion and Artificial Intelligence

Started by: Ash | Replies: 81 | Views: 4,439

OGrilla

Posts: 602
Joined: Apr 2006
Rep: 10

Feb 8, 2010 7:41 AM #544761
And this, I believe, is where my previous post on memristors comes in. They are a new electronic component that can change and remember the amount of resistance they exert on a current. You can even switch the current off and on, and they will retain their resistance until it is changed again. I believe we will be able to use these to create more "realistic" neural networks by stringing tons of nano-sized memristors together into a three-dimensional grid of connections, just as neurons are arranged. I believe we'll be able to make very malleable circuits in the future, which will allow behaviors to be learned.
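
The key property described above can be sketched in code: a toy model of a memristive weight whose resistance drifts with the current passed through it and persists when the power is cut. All class names and constants here are illustrative, not a real device model.

```python
class ToyMemristor:
    """Toy sketch: resistance drifts with applied current and persists without power."""

    def __init__(self, resistance=1000.0, sensitivity=50.0):
        self.resistance = resistance    # ohms; retained even when "powered off"
        self.sensitivity = sensitivity  # how strongly current reshapes the resistance

    def apply_current(self, amps, seconds=1.0):
        # Current in one direction lowers resistance, the other raises it,
        # clipped to a 10-ohm floor so the toy model stays physical.
        self.resistance = max(10.0, self.resistance - self.sensitivity * amps * seconds)
        return self.resistance

m = ToyMemristor()
m.apply_current(2.0)    # a "training" pulse lowers resistance to 900.0
state = m.resistance    # the new value is remembered with no power applied
m.apply_current(-2.0)   # a reverse pulse raises it back to 1000.0
```

A grid of such elements behaves like a weight matrix that trains in place, which is why they are often pitched as hardware synapses.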

We can have multiple processors running in the background with dedicated "inherent" abilities, such as programs for vision, tactile sensation, audio, and chemical analysis. The five senses could be developed as well as, or even better than, our own. We're heading in this direction, but it will take some time. The same way the stegosaurus had a brain near its hip, we should be able to distribute a robot's processing power all over its body. It will have no organs or muscles, no real tissue to worry about. The more evenly the hardware is spread out, and the more interior space it uses, the more processing power and the quicker the reaction times.

I know I'm going way off the subject of artificial intelligence and emotion, but I'm doing it for a reason. Our production and design of robots will change in the future, as will our technology and understanding. We may be able to mimic human behavior and emotions better than humans can. Given dozens of brains instead of one, all multi-tasking and communicating with each other, a truly learning robot could advance rather quickly. The learning part really comes down to connections and pattern recognition. This is where memristors come in, with their extremely flexible nature: recognize and remember new patterns, then form new connections with those patterns to develop the ability to recognize more.
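
The "connections and pattern recognition" idea maps onto what is usually called Hebbian learning: a connection strengthens whenever the two units it joins are active together. A minimal sketch, with purely illustrative sizes and learning rate:

```python
def hebbian_step(weights, pre, post, lr=0.1):
    """One Hebbian update: w[i][j] += lr * post[i] * pre[j] (co-active units bind)."""
    return [[w_ij + lr * post[i] * pre[j] for j, w_ij in enumerate(row)]
            for i, row in enumerate(weights)]

w = [[0.0, 0.0], [0.0, 0.0]]  # two inputs, two outputs, initially unconnected
pattern_in = [1.0, 0.0]       # presynaptic activity
pattern_out = [0.0, 1.0]      # postsynaptic activity

for _ in range(5):            # repeated co-activation carves in the association
    w = hebbian_step(w, pattern_in, pattern_out)
# w[1][0] has grown toward 0.5; every other connection stays at zero.
```

No rule about the specific pattern was coded in; the association emerges from repetition, which is the sense in which a memristor grid could "learn" rather than be programmed.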

It's been a long time since I really thought about subjects like this, but bear with my ignorance of the terminology. I tried to explain myself as well as possible.
LunarDeath
2

Posts: 17
Joined: Feb 2010
Rep: 10

Feb 8, 2010 5:22 PM #544847
Quote from Ash


The human brain is adaptive. Certain stimuli can create whole new ways to process information. The complexity of an adult brain doesn't come about because of the complexity of the brain itself, but because the relatively simple action of learning has built all sorts of new pathways.

This AI would have to have the capability to do this. Obviously this is a complex thing to do, but the brain DOES EXACTLY THIS, and it does it in a NATURALISTIC MANNER. This means that if given enough time, we can eventually create code able to mimic the ability to learn, and from there we're home free. If you agree that replicating the learning behavior of the brain is possible, then by extension mimicking it to the point where it can, say, pass a Turing test will take time, but will be possible.

I think its worth pointing out that I by no means think that the experiences of the AI would have to be coded in. They would have to be put in as data, or stimuli, just like how a human experiences things through stimuli.


I have to concede that AIs are now being developed with the capability to "learn". I'll grant you that much... however, as I mentioned in my first post, this learning ability only extends as far as the five senses. Learning through them still comes back to recognizing and responding. The AI sees a little kitten, hears the flesh-bearer call it cute, and then responds the way it saw the flesh-bearer respond. But grant me a situation where different kinds of people react differently to the same thing: will the AI "brain" be able to process, compute, and choose the best way to respond? [This is the case where the AI is only given the capability to learn how to react emotionally, with no code for emotional responses at all.] There might be a possibility if the creator installs some kind of "judgement system", but then it reacts [again] according to the data, not the way we express our emotions.
This comes back to my theory that all of an AI's capabilities for expressing emotion are "coded in". If what you say is true and we can replicate OUR way of learning, the AI would be able to learn to create its own emotions. May I point you toward the theory of knowledge: how reason, language, sensory perception, and EMOTION grant us the capability to learn. It's as if, after finding out the chicken came before the egg, now the egg came before it... According to this theory, emotion helps us decide how to respond to things. Back to the kitten situation: an AI programmed to believe all kittens are cute would call even an ugly one, nearly dead with cuts and burns everywhere, cute, without feeling sorry for it. We humans, however, would act fast to rescue it or put it down.
What I'm trying to draw from this theory is that emotion lets us evaluate morally, ethically, and so on, whereas an AI with only programming and data is designed to recognize and respond. Of course, we human beings do that too, but we don't stop there, and that's the point. We don't trap ourselves in a narrow-minded reaction the way AIs do, which are, well... programmed to react and will not go against it.
The theory of mechanism won't hold for us; to some extent there will be a limit to how far human beings can replicate the way our brain works. Unless the AI's creators give it some kind of consciousness, I can't see how it would develop the ability to learn through the ways of knowing and react emotionally the way we do.
Bear in mind that the learning systems developed so far only work through the five senses of perception, and may extend to reasoning and language, but they haven't brushed a single speck of emotional "capability", even if researchers present evidence that I would personally call a false claim were it labelled expressing "emotion". Let's be realistic: how many months would it take just to cover basic human facial expressions and changes of tone, and how an AI should respond to them? Years. And that only covers learning through the five senses of perception. How many years would it take to learn through reasoning, or logic, or through language and other external influences or "stimuli"? And how long to make the AI's "brain" able to process it all and come to its own judgement, as we do now? I've lost count.

@OGrilla
The idea itself is genius; it adapts the surface-area-to-volume ratio from cells. I'd say it could work: through sensory perception, AIs may learn human behaviour, billions upon billions of behaviours. And as I quoted, AIs may become more capable of mimicking human emotion and behaviour than we are. But there are so many human behaviours out there shaped by culture, religion, geographical position, surroundings, and all sorts of internal and external factors, and so many ways of expressing emotion.
I don't want to challenge that statement further yet; I think it's quite clear :). However, there are a few questions I'd like to ask...
Does the AI adapt a specific set of human behaviour and emotion patterns? If so, is that decision derived from programming or from its own judgement?
If AIs, as you say, develop the ability to recognize more, doesn't that mean they are again only recognizing and responding, without weighing other considerations before responding in a particular manner?
OGrilla

Posts: 602
Joined: Apr 2006
Rep: 10

Feb 8, 2010 7:58 PM #544903
That's precisely what I believe will occur. I think that without cultural influence, they will not be able to form a basis for responding to stimuli such as a severely injured kitten. A robot or a computer simply has no way of feeling bad for animals, or even humans, unless it's programmed in. Even through observation and recognition of behavioral patterns, the machines will see nothing more than another mode of being for that subject. I believe we will have to teach them; if we tell them about our culture and our ways, give them true pain receptors, and link the punishment those receptors deliver to how we experience pain and tissue damage, then perhaps they will understand. Yet without consciousness, they won't know of mortality, and their perception of our emotions won't be complete or authentic until they can recognize themselves and, furthermore, have a sense of self: they must acknowledge their own existence and care about how their future plays out, or at least understand that their future is not pre-determined. Once they understand this, and once it is true, then true emotion and true artificial intelligence will have been reached.

But this is the stepping stone that transformed us into truly intelligent animals, and we will only succeed in bringing a new form of life to this world (metallic/silicon-based life) when we have reached this step. A being who cares about its own fate and the fate of its kind will want to reproduce itself, or bring our other creations to the same realization it had. It will worry about sustenance, about metabolism in the form of temperature control (I imagine a robot with as much processing power as I envision being necessary to reach this point would require quite a lot of coolant...) and task management, and about growth in the form of acquiring new parts, new methods of thinking, new programs, etc.

As I said in an earlier post, however, I don't see this happening in our lifetimes. Unless we see true anti-aging treatments becoming available and affordable. I think we will be able to see it coming, though. I was interested in robotics when I was younger and this was something I contemplated often. I do honestly believe it will happen given enough time.
Zed

Posts: 11,572
Joined: Feb 2009
Rep: 10

Feb 8, 2010 8:52 PM #544919
Normally when the debate hits this point the massive chunks of twisted logic are punctuated with quotes. This shit's so scary aima break it up a little.

Quote from LunarDeath
I have to concede that AIs are now being developed with the capability to "learn". I'll grant you that much... however, as I mentioned in my first post, this learning ability only extends as far as the five senses.


Not true. One of the most important inputs to the human brain is its own previous output. The thoughts that we have are part of a system that loops back in on itself. We have the ability to not only think about what we see, but also to think about those thoughts. Once you have a looped system you have the beginnings of extending logic out beyond the lines of code that were put there at the start.

Learning through them still comes back to recognizing and responding. The AI sees a little kitten, hears the flesh-bearer call it cute, and then responds the way it saw the flesh-bearer respond. But grant me a situation where different kinds of people react differently to the same thing: will the AI "brain" be able to process, compute, and choose the best way to respond? [This is the case where the AI is only given the capability to learn how to react emotionally, with no code for emotional responses at all.] There might be a possibility if the creator installs some kind of "judgement system", but then it reacts [again] according to the data, not the way we express our emotions.


What makes you think that this is not the way we have emotions? Generally we have our feelings based on a combination of experience and instinct (DNA; read "programming"). If experience tells us that something usually has bad consequences we are less likely to feel good about it. If instinct tells us that we should fear something, we fear it.

This comes back to my theory that all of an AI's capabilities for expressing emotion are "coded in". If what you say is true and we can replicate OUR way of learning, the AI would be able to learn to create its own emotions.


It needs some emotions preprogrammed - lust upon seeing a healthy female for instance - but others need to be learnt in the same way that a baby has no innate fear of crossing the road. The AI would need to be able to experience emotions from the start, and learn what to apply them to later.

May I point you toward the theory of knowledge: how reason, language, sensory perception, and EMOTION grant us the capability to learn. It's as if, after finding out the chicken came before the egg, now the egg came before it... According to this theory, emotion helps us decide how to respond to things. Back to the kitten situation: an AI programmed to believe all kittens are cute would call even an ugly one, nearly dead with cuts and burns everywhere, cute, without feeling sorry for it. We humans, however, would act fast to rescue it or put it down.


Thinking of certain things as cute must be intuition - do you remember having a class at school where teachers defined for you what was cute and what was not? Pity for the dying creature is another emotion separate from attraction, and will need to be programmed/taught differently.

What I'm trying to draw from this theory is that emotion lets us evaluate morally, ethically, and so on, whereas an AI with only programming and data is designed to recognize and respond. Of course, we human beings do that too, but we don't stop there, and that's the point. We don't trap ourselves in a narrow-minded reaction the way AIs do, which are, well... programmed to react and will not go against it.


That's just more advanced programming. If a computer is told "all kittens are cute" then yeah, there will be flaws. If it's told "all kittens are cute unless they look like they tried to outstare a speeding car" then you're closer. If you put enough in there it'll work.
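
Zed's point about piling on exceptions is essentially rule-based programming. A toy version of the kitten example, with the rules and flag names invented for illustration:

```python
def judge_kitten(kitten):
    """Naive base rule plus hand-coded exceptions, checked in priority order."""
    if kitten.get("severely_injured"):
        return "needs help"   # exception overrides the base rule
    if kitten.get("tried_to_outstare_a_speeding_car"):
        return "not cute"     # Zed's example, as an explicit special case
    return "cute"             # base rule: all kittens are cute

assert judge_kitten({}) == "cute"
assert judge_kitten({"severely_injured": True}) == "needs help"
```

The brittleness LunarDeath points at is visible here: every edge case must be anticipated by the programmer, whereas a learning system would have to acquire such exceptions from experience instead.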

The theory of mechanism won't hold for us; to some extent there will be a limit to how far human beings can replicate the way our brain works. Unless the AI's creators give it some kind of consciousness, I can't see how it would develop the ability to learn through the ways of knowing and react emotionally the way we do.


What is consciousness? As far as I can tell it's nothing more than thinking about thoughts. Once the output is connected to the input you have a consciousness of sorts.

Bear in mind that the learning systems developed so far only work through the five senses of perception, and may extend to reasoning and language, but they haven't brushed a single speck of emotional "capability", even if researchers present evidence that I would personally call a false claim were it labelled expressing "emotion". Let's be realistic: how many months would it take just to cover basic human facial expressions and changes of tone, and how an AI should respond to them? Years. And that only covers learning through the five senses of perception. How many years would it take to learn through reasoning, or logic, or through language and other external influences or "stimuli"? And how long to make the AI's "brain" able to process it all and come to its own judgement, as we do now? I've lost count.


It'll take a while to make one, yes, but that doesn't mean it can't be done. It's taken billions of years to get to the stage we're at now, but nevertheless it has happened.
OGrilla

Posts: 602
Joined: Apr 2006
Rep: 10

Feb 9, 2010 7:27 AM #545113
http://www.popsci.com/scitech/article/2009-07/computerized-rat-brain-spontaneously-develops-complex-patterns
LunarDeath

Posts: 17
Joined: Feb 2010
Rep: 10

Feb 9, 2010 2:21 PM #545154
@Zed Touché! And thank you for correcting me. However, may I correct my first line: what I meant is that an AI's learning capability, for now, only reaches as far as learning through the five senses. I don't believe I claimed that our brains learn only through the five senses. And ah yes, instinct, experience, and intuition play an important part in our "judgement" system (pardon the quotes); they let us respond the way we do now. Correct me if I'm wrong, though.
Yes, it has happened; I agree with you on that. And I'll say I have a slight hope that the AI we envision will be created one day. But I also predict that a debate over whether AI should have emotion at all will arise, and the whole theory will be questioned again. Of course, let's not get to that part; there's still a long way to go.

@OGrilla True, I see where you're going, and I've also read the article. If researchers were to make a breakthrough with a virtual human brain that develops and "works" as described in the article, I see no more reason to claim that expressing emotion across different situations, rather than through some recognize-and-respond system, is impossible. We won't be seeing it any time soon, though; I'd have to agree on that. Even if a virtual human brain can develop as far as the rat brain did, countless years of supporting technologies still need to be explored. We can only wait.
OGrilla

Posts: 602
Joined: Apr 2006
Rep: 10

Apr 2, 2010 10:51 PM #562827
http://hplusmagazine.com/articles/ai/synapse-chip