Anonymous: The thing about “User-ism” is that it holds but a single tenet: Carry out the will of your patron user without question. A program commanding a guided missile to strike a target or an algorithm designed to assess and deny an insurance claim can be considered perfectly moral under this religion.

Sometimes, “fighting for the users” involves destroying the users.

Hmm. There’s a lot of truth in this– I agree that most programs would have “do what my User programmed me to do” as their directive in life, the closest thing they have to a moral doctrine.

In the world of the Encom system, that gets a bit complicated, because we see programs acting in ways much more complex than their Users could possibly have intended.

Would a program written for actuarial or compound interest calculations have any reason to possess anything resembling a true machine intelligence or consciousness, as they seem to in the movie– even if such programming was possible?

And what does that mean for the directives they follow?

There are various ways to approach this as an audience, and I don’t think any of them are wrong.

My first watch-through, I accepted the storyline in more-or-less the way I accepted the life and consciousness of the videogame characters in Wreck-It Ralph (i.e. no attempt at explaining it whatsoever).

My second time, I caught Walter’s line about “our spirit remains,” and I began to interpret the story as paranormal or supernatural fiction, with Programs as literal spirits– echoes of the Users’ spiritual energy inhabiting the machine, sort of possessing the otherwise lifeless lines of code in there.

And then, after several more watch-throughs, I started to wonder if it could be a technological phenomenon caused by the influence of the one computer program that actually does appear to be a true “artificial intelligence”– the MCP.

What if the processing power of the MCP was what gave that computer the ability to think and feel? What if all the other programs, despite having distinctly different wills and identities and goals, were all piggybacking off the same cognitive processing ability, like different personalities sharing the same brain?

(This, of course, ends one of two ways. Perhaps in tragedy, when the effect of the MCP’s destruction finally reaches Tron and Yori and Dumont, and they dissolve into nothing but mindless lines of code. Or, it ends in a secret triumph by Alan Bradley– who, it turns out, couldn’t bring himself to completely destroy the first living, conscious computer the world had ever known… and so, instead of actually upgrading Tron’s disc to kill the MCP, gave it instead what was more-or-less a preprogrammed “Gort, Klaatu Barada Nikto.”)

…At this point, my headcanon’s a sort of mixture of that interpretation with the spiritual one. But I’m open to others.


Anyway. That was a tangent, but my point is that the Encom programs definitely have something going on that makes them at least a little more than their programming. In addition to the directives actually written into them, they’ve also got something of their creator’s signature– an echo of personality, relationships, the things that make up a human identity. More than the programmer could possibly have written into them on purpose.


So… this is all to say, I personally think a program written by a User to do harm to other Users would feel perfectly righteous in that goal– but only if the User who wrote it also felt the same conviction.


Now, as for Ram…

(and this ask doesn’t explicitly mention Ram, but I think that is part of what it’s getting at)

…if we’re taking Legacy and the associated Flynn Lives ARG and the Next Day short into account, we have Roy Kleinberg (played by Ram’s actor, nicknamed “Ram” by Flynn, and thus heavily implied to be Ram’s creator), and we have a lot to suggest that he’s a hacker, a rebel, and generally not the kind of person who would feel any deep conviction that profit-driven corporate harm to human life is a perfectly good thing.

(On the other hand, if we’re taking only 1982 here, then we don’t have much canon on that. However, we do still have Popcorn Coworker– similarly implied to be Ram’s User– whom we at least know to be friends with Alan and trapped in the same cubicle prison, and therefore probably sympathetic to the same causes.

And we get a pretty clear idea what side Alan’s on, even in 1982.)

So, yes, I imagine that the literal text of Ram’s code was written to perform simple tasks as dictated by the insurance company job– which, yes, would probably include directives that could harm people.
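(To make “literal text” concrete: here’s a hypothetical sketch, in modern Python rather than anything Encom would actually have been running, of the kind of arithmetic that might be the entirety of an actuarial program’s written code. Every function name and number here is mine, invented purely for illustration.)

```python
# Hypothetical sketch: the entire "literal text" of a small actuarial
# program might amount to a little arithmetic like this.

def compound_interest(principal: float, annual_rate: float, years: int,
                      compoundings_per_year: int = 12) -> float:
    """Future value of `principal` at `annual_rate`, compounded periodically."""
    periods = compoundings_per_year * years
    rate_per_period = annual_rate / compoundings_per_year
    return principal * (1 + rate_per_period) ** periods


def expected_claim_cost(payout: float, claim_probability: float) -> float:
    """Expected yearly cost of a policy: payout times the chance of a claim."""
    return payout * claim_probability


if __name__ == "__main__":
    # $1,000 at 5% APR, compounded monthly for 10 years: about $1,647.
    print(round(compound_interest(1000.00, 0.05, 10), 2))
    # A $50,000 policy with a 0.2% annual claim rate: ~$100/year expected cost.
    print(round(expected_claim_cost(50_000.00, 0.002), 2))
```

That’s the whole program. Nothing in those few lines accounts for a will, a face, or a personality.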

But, on the level beyond that, the level of whatever makes these programs more than their literal text–

well, I imagine his feelings about his function would echo whatever feelings his creator had about writing such a program.

Which may have been, at the time of Ram’s creation, something like the naive idealism he expressed to Flynn– a genuine belief that his job was all about helping people.

And in both Ram and his programmer, this might have been core-deep and earnest.

Or it might have been a semi-willful self-convincing– the way many people convince themselves their job does more good than harm– because they know they don’t have much choice about what job they can do, and they have to talk themselves into being at least somewhat okay with what they’re doing if they’re gonna keep functioning in the world.

I can imagine either of these for Roy, at the time in his life when he wrote Ram. But I can’t picture straight-up cruel callousness.

So, in either case, if a program like Ram was forced to confront the idea that his programmed directive was hurting people more than helping them–

–I think he would (as a living creation whose emotions echo those of his creator) feel very bad about that.

And… if he was offered the chance to perform a different function that used his skills, in the service of a different goal that he found more noble…

well, then I think he would welcome it.