11 September 2017

It's important to remember what kind of nonsense AI is

There's a lot of productive work going on with AI.  Whether it's enough to justify, in economic terms, the cash being shovelled at the problem is doubtful, but let's stop for a moment.

AI as implemented has the same signal-processing, dendritic, layers-of-habit mechanisms brains do; it won't think better, and it may well not think faster in any sort of general case.  (It will handle volume.)  It is absolutely heir to precisely the same habitual delusions our brains fall into: expecting things to be like what we've already experienced, because that's the limit of our imagination of the world.

So it's not actually good for solving problems.  You have to very carefully define the problem you want solved, and a whole lot of human effort has to go into detecting whether or not that's what the monomaniacal savant you've created actually accomplishes.  This is (relatively) easy with chess; it's pretty hopeless with anything less formalized.  (Inventing a statistical measure of your unconscious expectations is exceedingly difficult.)

So why all that cash?

AI supports the delusion of useful control.

There are all sorts of essential control mechanisms in the feedback sense, but the delusion of control is that people in large numbers can be compelled to construct their desires to serve the goals of a small number of exalted persons.  This breaks down at the exalted persons; they can't do it.  AI gives them another reason to believe they should keep trying.

(It's rather the same with politicians viewing democratic processes as a problem; democracy is a solvent for control.  Given the sharp dichotomy between success and control -- one, or the other, never both, and often neither -- solvents for control are good things.)

3 comments:

Anonymous said...

I mostly agree, but the fact that it works somewhat like our mental processing has some advantages: it's easier to teach it to do things we can do but either don't want to (like much modern industrial manufacturing and shipping), or get distracted from enough that we screw up in deeply problematic ways (like driving).

We don't need perfection; we just need somewhere between 80% as good as a human (for tasks lacking fatal consequences) and 25% of the rate of serious failures (which automatic cars certainly aren't at, and won't be at for a while, but which seems both entirely possible and likely within a decade). Combine that with being cheaper to maintain than a human and able to work as long as spare parts and electricity are available, and you have something exceedingly useful in exactly the same sense as all other industrial technology - Aristotle's shuttle weaves without a hand to guide it.

Of course, all this assumes current sensible definitions of AI - artificial sentience would be a very different matter, especially since it would almost certainly resemble our sentience a great deal, and would thus likely object to being enslaved.

Anonymous said...

As a side note, one thing current AI has surprised me with is how many problems can be brute-forced. Without a constructive proof of P=NP there are clear limits to this approach, but it's still a fairly impressive tool for anything with a large enough database, and where (as I mentioned above) what matters most is an error rate that's notably lower than humans can manage. The fact that this seems to work for everything from language translation to recognizing photographs or picking up oddly shaped objects is striking.

Graydon said...

+heron61 My fundamental dubiousness that there's any such thing as intelligence is creeping in there, to be sure!

I don't mean to say that the stuff being described as AI isn't useful; it certainly is. And you're entirely correct that we don't need perfection to achieve technical utility.

What worries me is that the money and expectation driving all this isn't after the utility of consistency; it's after control, and (judging from the recent revelations about Facebook ad buys) it's quite likely getting something like it. That's pretty much guaranteed to be a disaster, because the ability to generate matching variety isn't there; it's like trying to steer a vehicle with no external sensors.