Saturday, August 9, 2008

Political...Scientists? Teshale's Comment

I agree with Stanek's point that science and politics are two vastly different animals, like unicorns and sharks, or pandas and Bill O'Reilly. However, I do not believe bureaucrats are the answer to bridging the gap. Such a union, in the hands of an expert, might well integrate seamlessly, like a mermaid where the human bit is the politics and the fish bit is the science. But is it not more likely that one will end up with some horrifying, Moreau-esque creation? Consider, rather, that bureaucrats may be the cause of the gap. Their jobs could well depend on keeping people apart-- if the gap is maintained, they can act as the middleman. Most bureaucracies, in fact, seem to devolve into elephantine mazes of rules and regulations, where nothing is really feasible without the proper forms (as noted in my film Brazil [1985]).

I suppose I take umbrage at Stanek's usage of the word "bureaucrat," which to my eyes carries a rather negative connotation. Max Weber defined the bureaucrat's task as follows: "Bureaucratic control is the use of rules, regulations, and formal authority to guide performance. It includes such things as budgets, statistical reports, and performance appraisals to regulate behavior and results."
A bureaucrat's task is to make sure a political body runs smoothly and efficiently-- not necessarily to determine any particular policy, but to ensure that policy is implemented well. I would propose that Stanek's idea is a sound one, but the task should fall to someone rather more dynamic, someone unbound by these regulations, whose office, intelligence, and skills allow him or her to transition seamlessly between these two jobs.

This task, I propose, would be best accomplished by enlightened absolutism. Some call it "benevolent despotism," but this is negative thinking. Since people are, of course, human-- with all the awkward emotions being human entails-- I suggest a giant supercomputer could be the answer to this issue. Giant supercomputers are nothing if not rational, and instilling a set of rules à la Asimov's Three Laws should ensure the computer does not do anything awkward, like become sentient. A drawback to this suggestion is that giant supercomputers always seem to develop megalomaniacal tendencies, but-- to be fair-- it's not as if humans never develop these tendencies either. With a giant computer controlling policy and society, we could a) be sure that this computer would have society's interests at heart, b) employ the men and women stuck in paper-pushing jobs to better effect, and c) allow, in the resulting collapse of society, for an entertaining and exciting post-apocalyptic world where a solitary man (or woman) can battle evil robots to save the world. Apocalypses are very in right now, as the success of my novel The Road (2006) can attest. I believe in giving the people what they want, and my proposal does so, while addressing the issue of problematic influences flowing either way between politics and science.



(Edited. Er, for one word.)

5 comments:

stanek said...

Salom, the giant sentient megalomaniacal policy-making supercomputer you speak of IS bureaucracy! It's just lower-tech.

Anonymous said...

salom have you ever seen I, Robot? do u kno what happens in that movie salom? robots rebel and start killing people. yes the Asimov Rules were instituted, or at least something like them, and yet the robots still found a way to do harm to their human masters, ultimately forming an all-knowing, all-controlling sentient master computer, that only will smith could destroy because of some ridiculous plot twist. my point is, does anyone really want an apocalyptic result from this new bureaucrat? i know one thing. as sure as Bill O'Reilly is NOT a panda, i dont want to meet my doom at the hands of some metal monster.

i much prefer the maker of my doom to be a living, breathing, thinking, feeling, loving, hating, morally conflicted yet frighteningly comical, human.

Anonymous said...

Stanek: But the computer would *not* be sentient; this is key. If it becomes sentient, then we will know that there's no point, and we might as well pull the plug and continue being ruled by people.
Wait, do you mean the computer is a bureaucracy, in that it encompasses all the roles of bureaucracy in one package? If so: oooh. You ARE good. I cede the point.

Bradan: The newly manufactured robots were changed to not operate according to the Three Laws, right? V.I.K.I. developed a faulty interpretation of the Three Laws and told them to use it. In any case, the whole scenario becomes rather moot if the computer gains sentience, which should *not* happen. I was attempting to say that if the argument against having giant supercomputers is that they always turn evil and kill people, this argument doesn't hold water for me, because people usually do the exact same thing. I suppose it all comes down to preferences, as you said.

Also, I think you underestimate America's desire for new and interesting lifestyles. ;-)

Also also, I don't *actually* want to be ruled by a computer. I never said my argument was a sensible one!

-Teshale

Anonymous said...

just to quote you...

"I suggest a giant sentient supercomputer could be the answer to this issue."

"instilling a set of rules ala Asimov's Three Laws should ensure the computer does not do anything awkward, like become sentient..."

so do we want a sentient computer, or no? i think first we need to define sentient.

though i totally agree with point 'c' :)

Teshale said...

My apologies, Dan, there should not be a "sentient" there. It should just be "giant supercomputer." Shall edit now.

-Teshale