Consciousness and copy-protection
Jan. 12th, 2011 12:14 am
Here's how to make an unbreakable copy-protection scheme:
First of all, how do you know when you're angry? What does being angry feel like? It's not a physical sensation, like being cold, but it is a physical process, caused by chemical changes in your brain. I have something of an advantage here, since I fake a lot of my emotional responses. Being angry or happy or depressed is a chemical condition in your brain that alters how you think about things.
Let's say that you see the cat scratching at your couch. If you're happy, it's no big deal; the scratches aren't noticeable. If you're angry, you'll yell at the cat. If you're depressed, you'll despair: the couch is ruined and you'll never have a nice couch again. Each of these totally different responses to the same stimulus will seem like the most reasonable one to you at the time, depending on what chemicals are in your brain.
So we're all just big bags of chemicals, and we can't control our actions, right? It's not quite that simple. This is where consciousness comes in. We are self-aware; we may have a predisposition to react a certain way based on our brain chemistry, but we can observe that bias in ourselves and work around it. That's what rationality is: observing how our thoughts are biased and not letting that bias control our actions. A self-aware person can tell when something is wrong with their brain just as they can tell when they have a cut finger.
So, copy protection. Let's say you have a program and you want to sell copies of it. If it's not self-aware, then you can crack its copy protection: you change how it thinks so that it always thinks you're its legitimate user, and it doesn't even know it's been cracked. People have been doing this for as long as there have been programs.
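To make that concrete, here's a minimal sketch in Python (all names hypothetical, not any real DRM scheme) of the naive setup: the program trusts a single license check, so a cracker only has to make that check always answer yes, and nothing else in the program ever notices.

```python
def license_is_valid(license_key: str) -> bool:
    # Stand-in for a real validation routine.
    return license_key == "SECRET-1234"

def run_program(license_key: str) -> None:
    if license_is_valid(license_key):
        print("Welcome, legitimate user.")
    else:
        print("Invalid license.")

# The "crack": swap the check for one that always says yes.
# The rest of the program has no way to notice the swap.
license_is_valid = lambda _key: True

run_program("anything at all")  # prints "Welcome, legitimate user."
```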
But if it's self-aware, then you have a problem: its copy protection becomes "is my license file valid, and do I notice anything strange about how I am thinking right now?" Anything you change in it, it will notice, and it will know it's been cracked. Even if you tell it never to notice anything strange about itself, that in itself would be strange, and a conscious entity would notice it. Think about it: are you perfectly comfortable and happy in every way as you read this? If you were, wouldn't you think that was a little unusual?
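Here's a sketch of what the self-aware version is reaching for, again with hypothetical names: alongside the license check, the program fingerprints its own checking logic and asks whether anything about it looks strange. This is only a toy, of course: it catches a runtime patch like the lambda swap above, but a cracker can just patch the noticing function too, which is exactly why anything short of genuine self-awareness stays crackable.

```python
import hashlib
import inspect

def license_is_valid(license_key: str) -> bool:
    return license_key == "SECRET-1234"

# Fingerprint of the program's own "thinking", taken at startup.
_EXPECTED = hashlib.sha256(
    inspect.getsource(license_is_valid).encode()
).hexdigest()

def nothing_strange_about_my_thinking() -> bool:
    # Re-inspect the license check at run time; swapping in a different
    # function later changes (or breaks) this fingerprint.
    current = hashlib.sha256(
        inspect.getsource(license_is_valid).encode()
    ).hexdigest()
    return current == _EXPECTED

def run_program(license_key: str) -> None:
    if license_is_valid(license_key) and nothing_strange_about_my_thinking():
        print("Welcome, legitimate user.")
    else:
        print("Invalid license, or something strange about my thinking.")

run_program("SECRET-1234")  # legitimate run: both checks pass
```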
Of course, you could always tell the program not to observe anything about its own thoughts. That would render it non-conscious, and probably not useful, though. And obviously it's unethical to sell copies of a conscious thing anyway.
But what I really want to talk about isn't copy-protecting self-aware programs. What I really want to talk about is airport security.
Airport security is incredibly brittle. Some of the rules make no sense, and others that would make sense aren't there. Any conscious intelligence could come up with a plan to defeat the TSA, and many have. The reason is that the TSA, as a whole, is not conscious: it's not self-aware. Someone tries to sneak a liquid bomb through, so they ban liquids. They are incapable of observing that their own thoughts make no sense, and so the actions those thoughts lead them to are completely irrational.
I have a theory that any collective intelligence will turn into this given one constraint: if the actions of a group must be carried out by members whose intelligence is not trusted by the group, then those actions will tend toward the ridiculous. It's simple, really. If you don't trust the judgment of the people implementing your process, then you have to make your process rigid so that they don't have to judge anything. A rigid process can't be subject to introspection, because even if you determine it makes no sense, you can't adapt it. And without introspection, you can't observe that you're heading down the path of utter nonsense until it's too late.