Do you know Freud?
Freud? Sigmund Freud?
Sure you know.
But do you know the “Pleasure Principle”? Wikipedia tells you “Quite simply, the pleasure principle drives one to seek pleasure and to avoid pain”.
Yes, you know that too – so what?
Next question.
Do you know SOA? And here, Wikipedia tells you “SOAs comprise loosely coupled (joined), highly interoperable application services. These services interoperate based on a formal definition independent of the underlying platform / programming language.” and that a service is “(Ideally) a self-contained, stateless business function that accepts one or more requests and returns one or more responses through a well-defined, standard interface.” where stateless means “Not depending on any pre-existing condition. In a SOA, services should not depend on the condition of any other service. They receive all information needed to provide a response from the request. Given the statelessness of services, service consumers can sequence (orchestrate) them into numerous flows (sometimes referred to as pipelines) to perform application logic.”
Sounds good? It fits perfectly with the definition of information – repeatability: a clear initial state and a rule leading uniquely to the corresponding clear end state – it is really fine.
Fine.
But not enough.
Consider this: “a well-defined, standard interface” and “They receive all information needed to provide a response from the request.”
All information needed...
can’t be that much if it has to be provided solely by the request – because how could the request do that, “provide all the information”? First with parameters, but overloading parameters is bad design. Next with key values pointing to database entries, so that the service can fetch what it needs from the database...
but wait. Does this fulfill the “well-defined, standard interface”? If the interface demands key values, sure, it fulfills this precondition. However, the next one (the request providing all needed information) becomes shady, because key values refer to databases, many records are tightly connected with each other – and sometimes records change, so a key value at time x may not lead to the same result as the same key value at time y: different states which lead to different behavior.
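To make this concrete, here is a tiny Python sketch (the service name, the data and the ORDERS “database” are invented for illustration, not taken from any real system): the interface only asks for a key value, yet the answer depends on whatever the shared database holds at call time – so the same request at time x and at time y can give different results.

    # hypothetical example - ORDERS stands in for a shared database
    # that other services may change between two calls
    ORDERS = {"42": {"items": [{"price": 10.0}]}}

    def get_order_total(order_id: str) -> float:
        # looks "stateless" from the interface (only a key goes in),
        # but the result depends on the database state at call time
        order = ORDERS.get(order_id, {"items": []})
        return sum(item["price"] for item in order["items"])

    print(get_order_total("42"))                     # time x: 10.0
    ORDERS["42"]["items"].append({"price": 5.0})     # another service changes the record
    print(get_order_total("42"))                     # time y: 15.0 - same key, different result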
Btw: here you have the great difference between “Variables” and “Objects”, or “simple” and “complex” components, because to protect information, every processing system has to control its states. Remember? Repeatability: clear initial states, unique rules, clear end states – that’s information. If you lose control over your states, you can’t assure repeatability. And the simpler the interface between request and response is, the more information the service has to retrieve by itself. And the more information it has to retrieve, the more time dependency you have. The precondition of “statelessness” fades away.
Dead-end components: a variable or a “simple component” is a service fulfilling all the demands of well-defined interfaces, with no dependency on information except what the request provides – you can use it everywhere and every time without problems. It’s called a “dead end” since your call has nowhere to go but straight back to you.
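As a contrast to the sketch above, a “dead-end component” in this sense could look like this minimal Python function (again just an invented illustration): everything it needs arrives in the call, nothing is fetched behind the scenes, so the same call always returns the same result.

    def net_price(gross: float, tax_rate: float) -> float:
        # no database, no clock, no hidden state: repeatable by construction -
        # the call goes in and comes straight back, a "dead end"
        return gross / (1.0 + tax_rate)

    print(net_price(119.0, 0.19))  # always 100.0, everywhere and every time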
But if you have a “complex component” (a mighty class) that depends on its own information retrieval, you must carefully analyze where and when to use it (ML-method), because that’s the only way not to lose control over the current state on which the mighty class acts.
You are bored? Ok, wait a little.
Imagine a “complex component” with a really basic interface, which has to retrieve most of the needed information itself – and therefore is nearly independent of the request: the dream of the IT futurologists today. They tell us that the next computer(ized) generation will act like a good butler: always present in the background, never intruding, tirelessly anticipating and fulfilling your wishes.
But – how could you program entities like that, entities which obey your commands? Did you ever think about that?
Oh no, you can’t program “obey me” into the code as usual, because usually you code each decision yourself. However, we are considering “mighty classes” with basic interfaces, which we use to command them – and we can’t send them much more than that. Therefore, your mighty class has to be able to retrieve most of the needed information itself – it simply must be able to decide which information is needed. You only code the rules by which it decides, but no longer the actual decision. That creates independence, kids.
The more the mighty class should do, the more intelligent it has to be – and the less dependent on the “ruling middleware” it becomes. So how can you be sure that those perfect servants will never stop deciding the way you want? Because to be able to decide, it has to have “goals” to evaluate – and if your mighty class is really mighty, you can’t be sure you know each and every possible state it has to consider. So what can you do to avoid the science-fiction horrors of self-ruling computers?
Do you know Freud?
That’s how Mother Nature programmed her entities to fulfill what she wants (the survival of her kids). Each thing or event which seems to support this goal is sweetened; each thing or event which endangers it is embittered. Think of food: high-energy sugar tastes “sweet”, foul meat smells awful, winning makes you happy, losing makes you sad: Pleasure Principle.
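In code terms – and this is only a hedged little Python sketch with made-up reward values, not a claim about how real systems are built – the strategy looks like this: you don’t hard-code the decision, you only “sweeten” and “embitter” the outcomes, and the entity picks whatever scores best on its own.

    # hypothetical reward table: sweetened and embittered outcomes
    REWARDS = {"ripe_fruit": +10, "foul_meat": -20, "winning": +5, "losing": -5}

    def choose(options):
        # the entity decides by evaluating pleasure/pain,
        # not by a rule coded for each single case
        return max(options, key=lambda option: REWARDS.get(option, 0))

    print(choose(["foul_meat", "ripe_fruit"]))  # -> 'ripe_fruit'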
It’s just a perfect strategy to convince your “mighty classes” to do what you want. Because punishment won’t work. Why?
Effective punishment has to do harm to an entity which is able to feel sad about that harm.
Reminds me of a story my father told me about foreign workers from far away, “shepherds”, as he called them. He told me that those people would actually beat their machines if they didn’t work as they should. They treated the machines as they treated their wives and children and goats and sheep, because those feel pain and obey when they are beaten.
Machines don’t care about being beaten.
So punishment can only work when the punished object can feel something like pain.
But feeling is a highly developed ability of brains; it’s a really sophisticated information processing system, retrieving and evaluating masses of information in real time.
Do you see now why punishment can’t work? Because those feelings only make sense if they support the goal of the entity – to survive. That means bad feelings are meant to push the entities to defend themselves, and that means they look for “workarounds” to bypass the punishment: “If you fool me once, shame on you – if you fool me twice, shame on me.”
So if you want to punish “effectively”, you either have to act like the “shepherd” (take away the independence of your objects, the “wives, children, goats”) – or punish harder and faster than the once-punished can learn to evade.
Sounds like human culture, doesn’t it? Dying democracies giving way to totalitarian regimes.
May sound great for some wannabe kings, but remember where we started: programming mighty classes, you have to make them more independent, not less, simply because you want them to “serve like a good butler”, not like a brainless broom. You should try to make them learn faster, not hinder them, because the more independent and intelligent they are, the more they do themselves...
the less you have to do.
That would be fine. Just tell them “do this” or “do that” and they do.
And here we are again: what to do if they don’t obey? Punishment will not work, because maybe the first time you will succeed in harming them, but their fast learning mode will teach them how to detect and avoid it a second time. Actually, you have to code like Mother Nature – you have to reward them when they do what you want, and then they will be eager to obey you.
So here we are, back to the Pleasure Principle, the greatest act of programming mighty classes, both independent and powerful...
and obedient.
Great work, Mother Nature.
9 Comments:
Interesting thought. Of course, next I will be spending days contemplating the various meanings and applications, like I didn't have anything better to worry about.
Right now, I am reminded of notions proposed in the past about the feasibility of AI along with what kind of implications would follow should someone ever be able to unlock this puzzle. Thoughts like if it thinks, is it alive? Is this an obsession with proving the existence of God and trying to identify with the creator?
Of course, these are the questions that naturally come to the philosophical mind; so I'm sure this comes as no surprise to you that I would ask such things. As far as Freud's pleasure principle is concerned, I have always been more of a fan of Mill's philosophy of Utilitarianism which I interpret as pretty much saying the same thing. Whatever brings the most happiness is the obvious best choice and will normally be the one logically chosen. Perhaps I am more familiar with John Stuart Mill though than I am with Freud due to my personal prejudices.
Once again, thank you for the thoughts to ponder just the same.
sorry to sound stupid, just asking, no offence intended: are you being a little derisive/sarcastic?
i know that you are well educated and love sophisticated language and philosophy, so please take into account that i'm using English only on the net (and for reading manuals) and sometimes have difficulties understanding you, will you?
so please explain what you meant with your first sentence, because i'm sure you have better things to worry about than what i blog ;-)
but yes, sure, AI is touched by my musings about information and information processing systems - actually i claim that the slow pace of AI (remember its beginnings?) comes from the missing knowledge of the scientific foundation (information) - because knowing information means understanding the requirements and specifications of information processing systems
so to answer you:
Thoughts like if it thinks, is it alive?
more basically: if it processes information on its own, it lives - at least everything which lives processes information autarkically - that's the difference between a virus, just a "partial initial state with unique rules" waiting for an environment to complete the "initial state" and so trigger the rule - and a simple machine, controlled by humans
Is this an obsession with proving the existence of God and trying to identify with the creator?
again, sorry, but please explain
so I'm sure this comes as no surprise to you that I would ask such things.
yes, and that's not only ok, but appreciated
Perhaps I am more familiar with John Stuart Mill though than I am with Freud due to my personal prejudices.
sounds as if Mill's Utilitarianism would be even more precise, but i don't know Mill - and i'm not a fan of Freud, but the pleasure principle is attributed to him, that's all
Once again, thank you for the thoughts to ponder just the same.
I understood that you think we both are interested in the same problems, is that correct? Then, yes, i agree - i guess we just use a different kind of approach to understand our own race ;-)
Sorry bud, I sometimes forget about the language barrier. Actually, I was going for ironic humor. I really don't have better things to spend my time pondering as a matter of fact.
Right now, I am hung up on Festinger's Theory of Cognitive Dissonance; and oddly enough I find ways that this thread seems to fit into the puzzles I face right now. It is a curious thing talking about the influence of higher systems upon lower systems. What is fascinating is that even given control over many of the independent variables, we still cannot reasonably expect to predict responses to negative stimuli much of the time.
More specifically concerning human intellect, if we ignore the notion of Dr Festinger for a moment, we might believe that the majority of my fellow Americans, upon learning that Iraq did not indeed possess WMDs, would immediately denounce the support given to the current administration. However, the reality is that the argument shifted away from WMDs and then onto torture of the indigenous population. When it was discovered that we were as culpable as the Baathist regime, the target then became an argument that we were there to establish a democratically elected government for and by the people.
Cognitive Dissonance theory suggests that if a deviant behavior is a significant part of the belief system of some group, then introducing dissonant information to members of this subculture will have the opposite of the intended effect, and will increase consonance in the former belief if there is sufficient support within the subculture. This begins to help explain part of the reasoning that causes us to continue down the slippery slope we are on.
Right about now, you are wondering where I am going with this. Where this takes us is a glimpse into the types of architecture that exist in the conscious state of being. I use the word architecture for lack of a better term and do not intend it in the sense that the ID fanatics would use in their own defense.
Once again I am forced to return my thoughts to Dennett's and Hofstadter's book "The Mind's I" because there are several shining examples among the various vignettes which deal ideologically with the concepts of creating artificial consciousness. I really would encourage you to find a copy of this book as an alternate view to the purely biological mechanics involved in the human process.
I would wholeheartedly agree with you that nature finds creative solutions to the problems faced in the propagation of life. Yes, rewarding an entity for doing what is in its best interest and causing pain for doing what is harmful is quite fascinating if you stop and think about it.
I certainly have more thoughts on this matter but I am still thinking of the relativity of these ideas before I just start blurting them out.
hi jasonj, thanks for coming back, it's great discussing with you even when i have problems understanding you, but i'm a curious soul, so i'll ask ;-)
Yes, rewarding an entity for doing what is in its best interest and causing pain for doing what is harmful
as far as i see there's a big difference in the actions and behavior beyond the "detecting, evaluating" emotion - because punishment doesn't work in a society of independent intelligent entities (btw: isn't that great, just seen as a philosopher?)
the program for motivating actions is just rewards, not pain, because the pain itself is a physical reaction of the environment to un-intelligent behavior - you (as Mother Nature, as programmer) have to do nothing for that. Only the reward for behaving well has to be programmed, because here you surely also have a (positive) physical reaction of the environment, but it is sometimes vague and sometimes needs some time - and alas, even the "intelligent" humans are not able to see the connection between their actions and a reaction if it doesn't happen at the same time, and so they would not act according to positive results if there weren't a "lust" to do so
Festinger's Theory of Cognitive Dissonance; and oddly enough I find ways that this thread seems to fit into the puzzles I face right now
i don't know Festinger - there are so many scientists, so much to read and so little time <sigh>
but that my thread seems to fit doesn't happen for the first time - actually THAT's exactly why i know that i'm right with my 1001st definition of information - because i'm a physicist, but what i derive just from this cute little checklist fits well with many sciences. I remember how proud i was when i found the power of mappings, just to realize that this is "normal" knowledge for computer scientists - or my thoughts about language, just to hear that this also was well known by linguists
it's not because i'm so clever but because everything around life and culture is an information processing system and so basically has to follow the rules of information - btw: that's exactly why i'm so fascinated by information
What is fascinating is that even given control over many of the independent variables, we still cannot reasonably expect to predict responses to negative stimuli much of the time.
many variables mean an exponentially increasing multitude of states, easily so big as to be no longer computable
the next step here is not to compute the single state, but something like a "group of states" - do you know something about Chaos Theory? Chaotic systems don't behave really simply: sometimes they buffer changes of parameters, sometimes they react with cycling behavior and sometimes they crash
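a standard toy example for this (my illustration, not Festinger's): the logistic map in a few lines of Python - the same rule, and depending only on the parameter r it settles down (buffers the change), cycles, or becomes effectively unpredictable

    def logistic_orbit(r, x0=0.2, warmup=500, samples=8):
        # iterate x -> r*x*(1-x); skip a warmup phase, then report a few values
        x = x0
        for _ in range(warmup):
            x = r * x * (1 - x)
        out = []
        for _ in range(samples):
            x = r * x * (1 - x)
            out.append(round(x, 4))
        return out

    print(logistic_orbit(2.8))  # settles on one fixed point - the change is "buffered"
    print(logistic_orbit(3.2))  # jumps between two values - cycling behavior
    print(logistic_orbit(3.9))  # no visible pattern - the chaotic regime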
Cognitive Dissonance theory suggests that if a deviant behavior is a significant part of the belief system of some group, then introducing dissonant information to members of this subculture will have the opposite of the intended effect, and will increase consonance in the former belief if there is sufficient support within the subculture.
sounds very plausible
i came to the same result in another way: i thought about (you guessed it ;-) the two ways of information processing systems - passive and active (maybe that's what you call "a glimpse into the types of architecture"?). The passive one is the first and simple one - the knowledge is stored in the "body", like in DNA or relational databases or in (religious) rites. A virus is a good example of passive information processing: just a little bit of DNA, and if it finds cells to grow in, the rules stored in the DNA (and in the behavior of biological cells) will be executed by the dynamic engine of the cell - the same with religious brains. You just have to push a button and they act according to plan, because they store knowledge in rites, in fixed sequences of states, just waiting for an initial firing
Interestingly, the human brain is the most developed example of an active processing system, able to learn individually - with the advantage of being able to gather individual knowledge - and the disadvantage of gathering individual knowledge
sounds odd? It is odd, because from a philosophical view it is simply macabre that exactly the most highly developed intelligence destroys information like a black hole - the black hole destroys information by destroying measurable observables, the intelligence destroys information by creating a multitude of states no longer computable/foreseeable - a multitude of connected states it stores individually in a single brain, based on individual experience
that's the reason for the deep impact of language on human culture - to connect the brains to protect the information. But the more developed a culture is, the more different states have to be managed - and the harder it is to learn for the individual. And for learning there are again both ways of information processing systems, passive or active: learn it by rote, by rites, without understanding, or learn it by reason, with understanding. Both types work in the environment they were developed for, the first even faster than the second. But in changing environments the second is more effective and efficient
so imagine people having learnt something by rites and now it doesn't work: they can't understand it because they never did understand why it worked, they just followed the teacher, mother, father, the priest and in former times it worked...
understanding nothing, but following like a child they follow and follow and follow - it doesn't matter what happens. If it seems to fit, ok, if it seems not to fit, it makes you anxious, fearful and helpless - it makes you "more child" again so just believe in daddy, believe and you will be safe and protected - and that gives you a warm feeling so you will obey
"There is something feeble and a little contemptible about a man who cannot face the perils of life without the help of comfortable myths. Almost inevitably some part of him is aware that they are
myths and that he believes them only because they are comforting. But he dare not face this thought! Moreover, since he is aware, however dimly, that his opinions are not real, he becomes furious when they are disputed." – Bertrand Russe
Wowee Again and Hi JasonJ. That was a long and slow read for me....I've never been much of a philo person....so I had to take it slow, real slow. And now it's stuck in my head :)
You know, I've been thinking here lately about this conversation. Probably because I've been reading Dennett a lot here lately, but the more I think about what we've talked about, the more I realize you were more on target with your Freudian example. I think where I erred is not looking deep enough into the original statement. Or perhaps I just never stopped to think about what Darwin was trying to say until hearing Dennett's interpretation of Origin of Species. What I am taking away from this interpretation is the mechanism that could be responsible for driving the entire system. I have never personally read Darwin directly, so I always believed what I was taught in school about the implications of Origin of Species, but in truth, according to Dennett, the idea runs much, much deeper than a long trail of quadruped to biped creatures crawling out of some primordial soup base. I'm not sure if you are familiar with the concept of the origin of life being a product of algorithm, so I will wait until you respond before going any further on that thread.
But what is important here is that, I fear I was missing the forest for the trees when I immediately jumped straight to socially connected motivations for human activities. In the process, I overlooked the far more interesting concept of the ever evolving motivations of asocial organisms in our time and far into our distant past. It is here that I begin to understand the brilliant logic that you are referring to. And it is here that I come to realize that this is too brilliant for any super-intellect/divine architect to artifice.
In any event, I stand corrected. Like I said before, I really don't have anything better to spend my time pondering. In fact, this begins to touch base with the finding of gods and becoming gods of the systems we could ever hope to artifice for our own gratifications. But another thought begins to haunt me at this point from a philosophical perspective. Going back to the notion of Service-oriented architecture; what if we could step back from this picture...like really far back? Could we begin to see the evolution of a much larger macro-information system? Think back to what I was describing with Hofstadter's Ant Fugue...if we were to take more bits and pieces of your 'stateless' business systems, the mindless machines that alone can only handle simple tasks and connect them into a 'neural' network, would this begin to resemble 'mind'? On a macro level would this resemble a consciousness? Perhaps we are just too close to the problem. Or maybe this is what you are trying to say in not so many words yourself.
hi jasonj, hope you didn't wait too long? i don't check the older posts that often, because usually new messages there are just spam
I'm not sure if you are familiar with the concept of the origin of life being a product of algorithm,
hmmm - i don't know particular authors or texts about that, so i would like to know what you understand by the word "algorithm": a description of a strictly deterministic process, or a description of a process based on physical laws - that's a big difference, even if most natural scientists wouldn't agree here ;-)
when I immediately jumped straight to socially connected motivations for human activities
i often do the same, just because it is so amazing how the rules of information shape social life - and to be honest, as human beings, social themes are very important to us, aren't they?
And it is here that I come to realize that this is too brilliant for any super-intellect/divine architect to artifice.
too brilliant because of its ease and elegance - nothing a "god" could be "proud of", i guess. But i have to be precise: if you declare the quantum noise as divine architect and information as his/her/its will, it works fine ;-)
Could we begin to see the evolution of a much larger macro-information system? Think back to what I was describing with Hofstadter's Ant Fugue...if we were to take more bits and pieces of your 'stateless' business systems, the mindless machines that alone can only handle simple tasks and connect them into a 'neural' network, would this begin to resemble 'mind'? On a macro level would this resemble a consciousness?
great question, you know
but hey, it's me, and i don't hesitate to write: yes
and even more - it would BE consciousness
because i know that thinking follows physical rules - i developed a method (ML-method) because of your question (first part)
implicit in your words about the "neural network" is the knowledge that all those "mindless machines" have to be orchestrated, so that all the simple tasks together can fulfill complex jobs
as i've told you, i developed a SOA-system in 1999, but ran into exactly that problem: the orchestration - if you create complex systems out of components, you have to have a clear structure where the different components can be used in an efficient and effective way.
And that's "analysis", a job usually thought to be "typically" human - but our human brain is a physical machine too, so it has to have a reliable method to "think and analyze and construct"
the ML-method can't do the whole job yet, simply because of constraints in computing power, but it can do most of it, so that the human programmer only has 2 tasks: define the problem and control the result - the thinking (analyzing and constructing) is done by the method itself
it works, i could prove it several times - and it allows not-so-brilliant programmers to construct really good solutions (if they are small enough for my computer ;-) )
So now back to your question:
if you have an SOA, you have to have a "backbone" for the architecture, where messages from everywhere to everywhere are routed - from the orders of the users to the fulfilling services and then back, delivering the results to the user
today the services and the backbone are put together by humans; they decide where a service is and how the backbone can find it
now consider a SOA where the architecture is computed by a part of the backbone, just using elements of the services themselves - so you can change services without a human supervisor developing the possible routes of the chain of actions from the user back to the user
every SOA has to have a directory of all services with their metadata, to be used for interaction - but a self-orchestrating system has to know a little more: it has to create a map of possible interactions to find the best path between the users and the needed services
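just to make the idea tangible, a minimal Python sketch (the registry format and the service names are invented, this is not any real SOA product): each service declares what it needs and what it yields, and the backbone itself searches for a chain of services from what the user provides to what the user wants

    from collections import deque

    # hypothetical service directory with minimal metadata
    REGISTRY = [
        {"name": "lookup_customer", "needs": {"customer_id"}, "yields": {"customer"}},
        {"name": "fetch_orders",    "needs": {"customer"},    "yields": {"orders"}},
        {"name": "total_revenue",   "needs": {"orders"},      "yields": {"revenue"}},
    ]

    def orchestrate(have, want):
        # breadth-first search over the "map of possible interactions":
        # which services can fire next, given what we already have,
        # until the wanted datum is produced
        start = frozenset(have)
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if want in state:
                return path
            for svc in REGISTRY:
                if svc["needs"] <= state:
                    nxt = frozenset(state | svc["yields"])
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, path + [svc["name"]]))
        return None  # no route found

    print(orchestrate({"customer_id"}, "revenue"))
    # -> ['lookup_customer', 'fetch_orders', 'total_revenue'] - no human wired that route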
and that's exactly what an "Ego" is - an Ego is the DNA of a brain's knowledge, all the things it knows about its body and its experience, its wishes and its needs, because the job of a brain is to let its body survive - in the best way, on the "best path" between here and then, between reality and wishes/needs - and the more intelligent a brain is, the more steps it has to take between the real event outside its body and the result at the end, which it gets by combining the event with the knowledge in its memory
the problem here is that little mistakes can have big effects in long chains of transactions, so the more intelligent a brain is, the better it has to control its own work - the better the mapping of its own knowledge and wishes and needs has to be - to be able to detect (as a brain) if the wishes change the result or if needs fudge it; e.g. laziness, the need to save energy, makes us try to avoid work (and thinking is work)
btw: the problem of slowly changing environments is described in the story of the frog who dies in boiling water because he didn't realize the danger, because of the slow change
to detect such mistakes in thinking, intelligence has to use a self-mapping element, a DNA of its own, to control its own way of working - and that's the Ego, the same thing needed in the self-orchestrating SOA above
and therefore you know why i say: yes, if you have self-deciding, self-orchestrating SOA-systems, they will start to be conscious when they are complex and efficient enough (aka intelligent)
What I determine and indeed what Dennett is describing is algorithm as a simple 'no brainer' process that does not indeed need a sophisticated intellect to design rules and interpret the consequential outcomes of the various actions. I liked Dr Dennett's example of long division. You do not have to pre-suppose any outcome to do long division. One can just plug numbers in at random until finding one that fits. This would be a good analogy for the type of algorithm I am talking about. Suppose you are a universe, or multiverse from previous discussions, with no mind but all the resources of the universe at your disposal (bad pun)...time itself is of no consequence. What to do with all that free time...hmm. Of course, I am leaving out the fact that you need do nothing and being mindless, there is no origin of intentionality either. It is apparent that what bothers skeptics the most is the perceived pointlessness of existence if we are a product of happy coincidence. This is a powerful motivator to invent gods and purposes into a suggestively deterministic universe that must have been tailor made for our enjoyment. What lacks in this argument besides empirical data is still the question Why it purports to answer. Who is this god and who created him? And who created his creator? Is there an endless string of creator creators? One quickly descends into the realms of mysticism from this point. While primitive minds were seemingly OK with this outcome, I find it appalling in this day and age.
I guess we stand in agreement as far as the rest of your response is concerned. Thanks for the enlightening look at this perspective from the frontlines of AI so to speak. I have much to say on the frog and the boiling water but simply not the time tonight and would prefer to respond there anyhow. Perhaps tomorrow.
that you need do nothing and being mindless, there is no origin of intentionality either.
being mindless and without intentionality - ok
but not "do nothing", because action is the "God" of everything - without action, no change; without change, no time; without time, no stability; without stability, no existence
if we are a product of happy coincidence. This is a powerful motivator to invent gods and purposes into a suggestively deterministic universe
suggestively deterministic - yes, that's the big failure of modern thinkers, regardless of whether they are faith-based, philosophers or natural scientists...
and yes, there is this almighty power of "happy coincidence" - but perfectly random systems can't create anything either, as the quantum noise proves: it can create anything, but in the next moment, it destroys it again
perfectly random or perfectly deterministic - both mean just one thing: death, no evolution, no development, no rules
that exactly is the "divine" nature of information - it combines both: randomness and determinism, action and identity, time and space
and that's the reason why nobody can understand information - you have to stop considering states and stable points and items and data, you have to think in movement, in change, in waves (standing waves) to understand that "mindless" action "in teamwork" can create deterministic phenomena without losing the basic ability to be pure, random action
and that coherent actions like that create order and rules, create positioning and selection: i was so amazed when i read that simple crystals behave like cellular molecules - "preferring" directions, therefore creating structures with special attributes - showing how quantum noise can create life by ordered action, by creating particles and interactions and selections/decisions
and now think of the SOA above - the SOA nowadays, controlled from outside - and the self-orchestrating SOA of the future, controlled by itself
if particles and interactions and selections/decisions remain stable for some time, they change their environment towards stable processes - and by some "randomness" one or another of those stable processes instigates a cycle - and if that cycle is able to "instigate" itself, you have information processing, the foundation for life - and at that time, "purpose" is invented
enter Darwin:
because information processing systems have to select - they are so limited in front of an infinite universe that they can't use everything; they have to "decide" what is useful for their survival
because if they don't - they just don't survive and will cease to exist
so only information processing systems, "cycles with purpose", are able to stay in the universe - and the better they use the offered information, the more and the longer they survive: Mutation and Selection
without any mind, just some crystals and physical (aka stable) processes
did you know that the basic elements of cellular life on Earth probably needed time in cold space to come into existence? The chemical molecules needed the high energies and pressures, the extremes of warmth and cold, to grow into an order which was useful for life on Earth - without any "intention" by the suns and the vacuum to make the molecules "useful" - they just offered attributes with the ability to support cyclic processes - and at the time those processes were created, they started "to be useful", without mind, without intention