http://www.nature.com/nature/journal/v443/n7111/full/443502a.html
10/9/2006 9:43:29 AM
This may prove useful, but most economic theory works out in practice because people are, on average, quite rational. Any individual may be irrational about a few specific decisions, but there really isn't widespread irrationality when you look at an entire group of people. This is where terms in economics like "the wisdom of crowds" come in. Even when people do things to their own detriment, it is generally because they've followed a course of what they feel is rational behavior, and it was the best choice at the time. Economics talks about this a lot too (see "the madness of crowds").

I think truly irrational decisions are rare. Even the choice to pollute the globe can be a rational choice based on selfishness. I'm not saying that makes it a good choice, but it can be rational without being right.
10/9/2006 9:55:09 AM
The article, and this article about that article ( http://arstechnica.com/journals/science.ars/2006/10/9/5541 ), seem to imply, though, that the classical economic models are often wrong.
10/9/2006 10:16:42 AM
I call "So what?"Even if people are ridiculously irrational all the time, it doesn't change much of anything: telling people they are irrational is not going to make them act any differently. And depriving them of the right to make their own decisions will not fix the problem either: the decisions are still being made by demonstrably irrational people.
10/9/2006 10:43:54 AM
It's not that people act irrationally; on the contrary, they act very rationally. What economics does is simplify the situation. It's obviously an imprecise science of guessing, much like meteorology or statistics. What these sciences do is attempt to accurately model a very complex environment in very simple measures.

What we have is a situation where people do not necessarily act to their own advantage, but they tend to. Economics simply assumes we always do. What is actually taking place is that we have billions of people all interacting with each other and with everything around them, each of these interactions affecting what later decisions they will make. As one would imagine, this is nearly impossible to count and quantify individually. That is why we have economics: so we can make some kind of guess about how this system functions, and how it might function later. But it is still a guess, and often not a good one.
10/9/2006 12:24:17 PM
Things like this are the reason I'm hesitant to use much game theory in my work as a business and strategy consultant; it often makes too many assumptions that competing companies or "opponents" will act intelligently and arrive at the same conclusions as the game-theoretic model. Even if people act rationally, they are quite often stupid.
10/9/2006 1:30:08 PM
Actually, game theory has a pretty good level of predictive power. I do agree with most of what both Kris and Loneshark are saying.

moron, it's kind of impossible to make the predictions any better if the only thing you're adding to the equation is "well, people act irrationally, so you can't predict their behavior." That only serves to add a measurable uncertainty to the predictions. If you say they act irrationally but predictably, then you may have something there. Unfortunately, we can't say that for people in most situations.
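To put some numbers behind "predictive power", here's a throwaway sketch (textbook prisoner's dilemma payoffs; the code is mine, not from the article) of the kind of prediction the theory makes: find the strategy pairs where neither player gains by unilaterally switching.

```python
from itertools import product

# Classic prisoner's dilemma payoffs (row player, column player).
# Strategies: 0 = cooperate, 1 = defect. Textbook numbers.
PAYOFFS = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # both defect
}

def is_nash(a, b):
    """Nash equilibrium: no player can do better by switching alone."""
    row_pay, col_pay = PAYOFFS[(a, b)]
    row_best = all(PAYOFFS[(alt, b)][0] <= row_pay for alt in (0, 1))
    col_best = all(PAYOFFS[(a, alt)][1] <= col_pay for alt in (0, 1))
    return row_best and col_best

print([pair for pair in product((0, 1), repeat=2) if is_nash(*pair)])
# -> [(1, 1)]: the model predicts mutual defection
```

The model predicts mutual defection every time; real people in lab versions of this game cooperate surprisingly often, which is exactly the kind of gap the article is poking at.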
10/9/2006 3:06:25 PM
<n/m>[Edited on October 9, 2006 at 3:10 PM. Reason : .,.]
10/9/2006 3:09:33 PM
These guys, and their kind of thinking, are the first steps toward adding more than "people will act irrationally." Personally, I think we will have AI computing someday (kind of like what you see in Star Trek), and one component of this is figuring out why people do the things they do. This type of research will eventually lead some scientist somewhere to develop a program that mimics our irrationality/decision-making process. If more people think along these lines, it could be possible to develop models more advanced than the Bayesian stuff they probably use now (I'm guessing).
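To show the kind of "Bayesian stuff" I mean, here's a toy sketch (all of the probabilities are made up): an observer watches someone react to lowball offers and updates its belief about whether they're the type who spitefully rejects them.

```python
# Made-up likelihoods: P(reject a lowball offer | player type).
LIKELIHOOD = {"rational": 0.05, "spiteful": 0.80}

def update(p_spiteful, rejected):
    """One Bayes-rule update of P(spiteful) after observing a choice."""
    p_rational = 1.0 - p_spiteful
    if rejected:
        num = LIKELIHOOD["spiteful"] * p_spiteful
        den = num + LIKELIHOOD["rational"] * p_rational
    else:
        num = (1 - LIKELIHOOD["spiteful"]) * p_spiteful
        den = num + (1 - LIKELIHOOD["rational"]) * p_rational
    return num / den

belief = 0.5  # start undecided about the player's type
for rejected in [True, True, False, True]:  # observed reactions
    belief = update(belief, rejected)
    print(f"P(spiteful) = {belief:.3f}")
```

A model that mimics our decision making would need something much richer than two fixed types, but that's the general flavor.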
10/9/2006 3:11:28 PM
Moron, wow. In short, no. The reason we do not currently have AI as in Star Trek is purely technical: our computers are purely deterministic, so they cannot know or do anything we did not program them to do. Until we start making computers out of neural networks, whenever you hold a conversation with a computer you are really talking to the programmer, just through an intermediary.

When we do start using neural networks, the cleverness of the programmer will fade, replaced by the personality of the computer, sure enough. But the computer will just be a retarded human hooked up to a vast memory bank. It will be impressive to us, but try teaching it to play a game it has never heard of before. It won't be until we can build neural networks with a density similar to human brain tissue that we'll start to see fast-thinking, brilliant machines.

It has nothing to do with simulating irrational thought patterns, because merely simulating intelligence is not A.I. When we truly invent A.I., we will not need to have it simulate irrationality; it will be irrational all by itself at start-up.

[Edited on October 9, 2006 at 4:21 PM. Reason : .,.]
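To make the contrast concrete, here's a trivial sketch (mine, nothing more): the programmer writes only the update rule; the OR behavior is never written down anywhere and emerges from the examples. (Yes, the training run itself is still deterministic; the point is that the behavior was learned rather than dictated.)

```python
# A single perceptron learning OR from examples. Nobody writes
# "return a or b"; the rule emerges from error-driven weight updates.
w = [0.0, 0.0]
bias = 0.0
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                      # a few passes over the examples
    for x, target in DATA:
        error = target - predict(x)      # perceptron learning rule
        w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]
        bias += 0.1 * error

print([predict(x) for x, _ in DATA])     # -> [0, 1, 1, 1]
```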
10/9/2006 4:15:14 PM
You notice I did say "someday". It would take far more than a single post to fully expand on the issue of AI. But we have to have some good software to go along with that neural-net hardware, especially if we want to bypass the N-year learning process of a human.

[Edited on October 9, 2006 at 4:37 PM. Reason : ]
10/9/2006 4:37:09 PM
I'm pretty sure humans are probabilistic. Our biology is chemical, chemistry is based on quantum physics, and quantum physics is probability-based (everything has some probability, no matter how small).

But probabilistic systems can be reasonably well modeled; the problem with something like our brain is that it's chaotic, which is where our computers are limited. The easiest way past this (and likely the only way) is some vast neural net of processors modeling each probabilistic process. And if you take notice, computers are becoming increasingly more vectorized (look at Sun's Niagara, or the Cell, or Intel's recent 500-core prototype (or some ridiculous number)). A big problem I foresee them having is interconnect bandwidth and communications algorithms, but even that isn't an insurmountable barrier. There was recent news of "teleportation", meaning quantum teleportation, which could pave the way for very fast interconnect technology, and Intel also has its laser-on-a-chip, which is a good intermediate interconnect until we figure out quantum. All this might not happen until we're all dead, but the pieces are in place for it to come to fruition maybe in the next 100 years.
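The "chaotic" part is easy to demonstrate with the standard toy example (the logistic map; this is a generic illustration, nothing specific to brains): start two trajectories a billionth apart, and even though each step is simple arithmetic, prediction falls apart within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map
# x' = r * x * (1 - x), the standard toy model of chaos.
r = 4.0
a, b = 0.300000000, 0.300000001  # differ by one part in a billion

for step in range(1, 51):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
# By step ~40 the two trajectories bear no resemblance to each other.
```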
10/9/2006 4:50:05 PM
Moron, when I say today's computers are deterministic, I mean always. Never, not ever, will you sit down to type an e-mail and have the computer say "I'm in a bad mood, not now" (barring human error in design and manufacture).

As for Kris and Google, think about it for a second. What is the computer doing? It is taking a mountain of data and analysing it to produce results. It is impressive, it is very clever, but it was programmed to do that. If I keep the data the same, then every single time I run the search I will get the exact same results according to the Google formula. The Google formula has variables that change, and it is very data dependent. But the people at Google did not say "Good morning, G-Engine, use the best search algorithm you can come up with; use your judgement." Hell no, it was programmed by a very clever human to apply a given algorithm based upon a complex array of fixed conditionals. Of course, I don't know if non-engineers can understand what I am talking about. How adept at computer programming are the two of you, Moron and Kris?

A sentient creature is non-deterministic; in programming jargon, our brains violate state while executing. In other words, executing instructions (living, in our case) does more than change the values in memory: it changes the processor. Your brain no longer responds to stimuli the same way it did an hour ago, and it may never respond the same way again. In an hour you may give a completely different answer to the same question. True AI will act the same way whenever it is developed, because we are all learning machines. We were not programmed; at best we programmed ourselves, and we will continue doing so until we suffer brain damage. This makes it very difficult for others to program us: they have no idea how their code will decompose or what code we will inject ourselves later. (To push the computer analogy way too far: neural networks work nothing like our PCs.)

[Edited on October 9, 2006 at 9:31 PM. Reason : .,.]
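If it helps the non-engineers, here's the difference in toy form (a made-up example, obviously nothing like Google's actual code): the first function is a fixed mapping from data to results, so the same query against the same data gives the same answer forever; the second alters its own state with every question, so the same question can get a different answer an hour later.

```python
def fixed_search(query, index):
    """Deterministic: same query + same data -> same results, always."""
    return sorted(index.get(query, []))

class AdaptiveResponder:
    """Stateful: answering a question changes how the next is answered."""
    def __init__(self):
        self.mood = 0

    def answer(self, question):
        self.mood += len(question) % 3  # every interaction alters the machine
        if self.mood > 4:
            return "I'm in a bad mood, not now"
        return f"answer to {question!r}"

index = {"ai": ["page1", "page2"]}
print(fixed_search("ai", index) == fixed_search("ai", index))  # True, forever

bot = AdaptiveResponder()
for _ in range(4):
    print(bot.answer("what is ai?"))  # the reply eventually changes
```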
10/9/2006 9:23:08 PM
psst, you're arguing with a starry-eyed commu-socialist
10/10/2006 11:36:38 AM
I really wish I knew more about computer programming so I could get in on this debate. Sucks hard to have a debate about economics where I can't get a word in because I don't know what I'm talking about. I could always troll it, I guess.
10/10/2006 11:40:03 AM
Right right, I know, but without the analogies to computer programming, I really can't even compete philosophically on this.
10/10/2006 12:18:25 PM
^^There's no point in discussing it with him because he has a viewpoint that is different from yours?
10/10/2006 5:19:29 PM
You're not a computer; how can you say that, at a complex enough level, their perception of reality would not be subjective?
10/11/2006 12:00:59 AM
Bakunin, right, because I prefer to debate facts, and Kris is chock full of false facts that I enjoy catching him on. I have enough trouble convincing Kris on subjects where objective truth exists and I have the data to prove it. But this topic is pure opinion; it is an article of faith no matter which way you swing. One cannot prove that humans are fundamentally different from toaster ovens, so why should I bother trying?
10/11/2006 9:01:25 AM
But your definition is kind of hard to pin down. Honestly, I can't tell whether another person has a subjective reality, much less a computer. If that is your definition, then the only person we can definitely define as human is ourselves individually.
10/11/2006 11:33:00 AM
If you can define what makes your own subjective reality, we'd be in business. The biggest problem is that it's the closest thing to our existence but the furthest away as far as definition is concerned.
10/11/2006 11:34:45 AM
I have a subjective reality; it's the only thing I can truly be certain of. However, no one else can ever really be certain of it.
10/11/2006 12:31:39 PM
You're right. But until you figure out how it works, what exactly causes it to be (what makes "you" where you are, and not where I am?), it makes little sense to compare us to computers and try to recreate ourselves on them. I mean, how the fuck do you engineer something when you're ignorant of the requirements?
10/11/2006 12:38:31 PM
I'm saying those aren't the requirements. We can't even consider other people intelligent by those requirements. I am able to say that some person other than myself is intelligent, correct? Yet I am not able to say that they have a subjective reality, so this cannot be the requirement for intelligence.
10/11/2006 1:51:35 PM
I wasn't so much addressing this thread as your ridiculous, tacit assumption that human = computer, and that really, really complex computers will suddenly become like us. There's more to it than that. I'm not claiming it's some non-physical soul (I don't believe that), but cognitive science researchers who take your approach to the problem are the reason we're not going to arrive at a true piece of artificial intelligence for a long, long time.
10/13/2006 5:20:42 PM
Who cares if the computer experiences a subjective reality? If its function approximates intelligence within my subjective reality, then I will consider it AI. It'd be better if it didn't, really; then we wouldn't feel so bad about treating it as property. Why's the damn thing got to have a soul?

It's not like human intelligence was engineered; it is the result of billions of years of chaotic selection. The lessons that could be learned by treating DNA as an algorithm to emulate would be uselessly complex.

I do not think it is possible for humans to fully understand their own cognition, and I do not think it is necessary for humans to emulate it.

[Edited on October 13, 2006 at 6:18 PM. Reason : *]
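Here's what I mean about selection doing the engineering, in toy form (a bare-bones selection-and-mutation loop, not a claim about how real DNA works): nothing in the loop understands the target, yet the population stumbles onto it anyway.

```python
import random

# Toy evolution: selection plus mutation, no designer anywhere.
random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                      # the fittest survive
    population = [[bit if random.random() > 0.1 else 1 - bit  # 10% mutation
                   for bit in random.choice(parents)]
                  for _ in range(30)]

print(generation, population[0])  # typically matches TARGET within ~20 generations
```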
10/13/2006 6:07:19 PM
^ Well uh, otherwise "AI" is just a fancy-pants name for "computer science." Intelligence really suggests that the machine is experiencing the same sorts of things we are, not just behaving the same way.
10/13/2006 7:31:59 PM
The only reality you can perceive is your own; there is no way to know anyone else perceives what they claim to perceive, etc. etc. So in that respect you can't differentiate, within the system, between functional equivalence and perceptual equivalence.

damn I'm tripping balls yo
10/14/2006 9:30:24 PM