Smart Systems: A Growing Nuisance

This post originally appeared on The Latest.

Machines and software that think they know what I want are not making my life any easier.

I finally got a “smart” phone not too long ago. One of the biggest differences from an old-style cell phone that I noticed right away was the autocorrect function. I could not type some perfectly legit words (or, often, longish Finnish word forms) in a message without having the device “correct” them. I had no way to type the word I wanted other than to type it, let it be corrected wrong, and then go back and revert it by hand.
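
To make the mechanism concrete, here is a minimal sketch (in Python, purely illustrative and not any phone’s actual algorithm) of a dictionary-based autocorrect that force-replaces anything outside its vocabulary. The tiny lexicon and the example word are made-up stand-ins; the point is just that a valid but unlisted word form gets mangled with no way to opt out:

```python
# Minimal sketch (NOT any real phone's algorithm): a dictionary-based
# autocorrect that force-replaces any word it doesn't recognize with the
# "closest" known word. Valid but out-of-vocabulary forms get mangled.
import difflib

VOCABULARY = {"talo", "talossa", "kissa", "koira"}  # tiny toy lexicon

def force_autocorrect(word: str) -> str:
    """Replace unknown words with the closest dictionary entry, no opt-out."""
    if word.lower() in VOCABULARY:
        return word
    matches = difflib.get_close_matches(word.lower(), VOCABULARY, n=1, cutoff=0.0)
    return matches[0] if matches else word

# "taloissammekin" ("in our houses, too") is a perfectly valid Finnish form,
# but it is not in the toy lexicon, so it gets "corrected" into something else.
print(force_autocorrect("taloissammekin"))  # -> likely "talossa" (wrong, and forced)
```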

At least there was a way to turn that function off, leaving only the much more helpful one where the device suggested words it guessed I wanted to type. Still, before I found that setting, the autocorrect only caused me extra trouble, and even figuring out how to turn it off was work.

Maybe some genius figured that statistically, I would make fewer errors that way even counting the incorrect corrections. But the “smart” errors tended to be worse; at least with ordinary typos, you can probably see it’s a typo, instead of some confusing (or hilariously embarrassing) non sequitur word. Also, you’re not forced to make the error.

Now, my phone still does that thing where it turns the image on the screen sideways for random non-reasons, yet when I actually do want it turned, convincing it of that takes, well, longer than pressing a button would.

Google used to be handy for finding stuff on the Internet (to say the least), but nowadays, it’s getting increasingly useless. I was pretty much fine with it looking for words that were like the one I entered, but right now, there’s a high chance it will return search results that omit one or more of the keywords I entered. And guess what? Those results are almost never what I want. There was a reason I entered the keywords I did.

(And of course Google has the option to put a word in quotes to make sure it’s searched for. But that also means it has to be exactly in that form.)
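
As a rough illustration of why dropped keywords are so unhelpful, here is a minimal sketch (Python, with made-up documents and function names, not a description of how Google actually works) contrasting a strict search that requires every keyword with a “lenient” one that silently discards keywords when nothing matches all of them:

```python
# Minimal sketch (NOT Google's actual behavior): strict search requires every
# keyword; "lenient" search quietly drops keywords when nothing matches.

DOCUMENTS = [
    "fireworks ban proposal in finland",
    "fireworks photography tips",
    "finland travel guide",
]

def strict_search(keywords, documents):
    """Return only documents containing every keyword."""
    return [d for d in documents if all(k in d for k in keywords)]

def lenient_search(keywords, documents):
    """If nothing matches all keywords, silently drop one and try again."""
    results = strict_search(keywords, documents)
    if results or len(keywords) <= 1:
        return results
    return lenient_search(keywords[:-1], documents)

print(strict_search(["fireworks", "ban", "finland"], DOCUMENTS))
# -> only the result that actually fits the query

print(lenient_search(["fireworks", "ban", "sweden"], DOCUMENTS))
# -> falls back to results missing "sweden", which is rarely what you wanted
```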

“Smart” systems that think they know what you want are no good if they don’t get it right. Even if they get it annoyingly wrong only a significant minority of the time, those annoyances may well negate the unnoticed convenience of when they get it right.

And it seems that we are trying too hard now. Judging from the amount of annoyance being caused, we should be making fewer things “smart” when we’re just getting stupid results from the attempt.

Adding to the same annoyance load are systems that are not so much smart as selfish. They’re not even trying to help you, but they have the same kind of effect of disrupting what you’re doing by doing something you don’t want. Think of pop-up windows on websites, or sites that foil your Google search by turning up as results when they’re not relevant, like online dictionaries that don’t actually have the word you’re looking for.

A Real-Life Utilitarian Hypothetical

This post originally appeared on The Latest.

If someone says we should maximize overall happiness, does that mean we could torture a few people if it entertained a lot of people? I found a context where this question is not an absurd thought experiment: fireworks.

Utilitarianism is an ethical theory according to which what’s right is what maximizes the overall good, with good usually being interpreted as something like happiness and the absence of suffering. In practice, this is one principle people’s ethical intuitions follow. However, since it’s not the only one, it’s possible to come up with counterexamples that make it sound like utilitarianism is wrong. For example, is it right to maximize overall happiness in a way that is unjust?

A more concrete example is this: Would it be right to torture a few innocent people if it gave a lot of people a lot of amusement to watch them being tortured, if the utilitarian calculus (which doesn’t actually exist but is spoken of hypothetically by philosophers discussing utilitarianism) indicated that the pleasure of the large number of people would add up to more than the pain of the few?
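
Just to make the arithmetic of the thought experiment explicit, here is a toy sketch (Python, with numbers invented purely for illustration) of what such a calculus would amount to if it did exist: summing everyone’s pleasure and pain and treating the total as decisive:

```python
# Toy version of the hypothetical "utilitarian calculus" (no such calculus
# actually exists); the numbers are made up purely to show how aggregating
# welfare can endorse harming a few for the amusement of many.

def net_utility(pleasures, pains):
    """Total pleasure minus total pain across everyone affected."""
    return sum(pleasures) - sum(pains)

spectators = [1] * 100_000   # 100,000 people each gaining a little amusement
victims = [10_000] * 3       # 3 people each suffering enormously

if net_utility(spectators, victims) > 0:
    print("A purely aggregate calculus calls this an improvement.")
else:
    print("Even by aggregate lights, the suffering outweighs the fun.")
```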

But really, that’s not exactly a real life example, is it? It’s just a thought experiment moving at the borders of what would be hypothetically possible, right? That doesn’t make it wrong, by the way. It could show that the principle of utilitarianism isn’t really right because it’s possible to get absurd results from it in principle.

There’s a law being proposed in Finland right now that would restrict ordinary people from using fireworks on New Year’s night. Reasons given for the proposal are that a minority of people, as well as many pets, experience distress from the constant banging on that one night; and that many people get injured using fireworks every year, even if it’s a small percentage compared to how many people use them.

A major side of this issue is clearly a form of the hypothetical thought experiment on utilitarianism. Is it right to subject a few people (and other animals) to harm on one night of the year if a lot of people get enjoyment out of it?

Of course, there are some differences. Actively and purposely torturing innocent people would be more wrong than doing it as a side effect of something that’s not meant to harm anyone. Also, another aspect of the fireworks question is the question of how much it’s right for the state to limit what people do.

Nevertheless, what I see as the central ethical question is the question of whether it’s all right to allow the majority to amuse themselves in a way that hurts a minority. That makes it a surprising real-life example of what seemed like an entirely hypothetical thought experiment. I’m sure there are other such examples as well.

In practice, the main sticking point with this proposal is likely to be tradition. People will likely want to go on doing something they’re used to doing. The usual reaction to anyone wanting to forbid something people are already used to doing is to treat the whole suggestion as all but absurd and to accept any weak excuse as a defense of the tradition.

The Dependency Principle and You

This post originally appeared on The Latest.

A principle introduced by science fiction author Iain M. Banks in his fictional universe tells us something about ours as well.

I’m in the middle of reading the science fiction novel Excession by Iain M. Banks and just came upon a name for something that I have thought of before. It’s also a good name for something we need to remember if we want to think of our world through the scientific worldview.

Excession describes how the hyperintelligent artificial intelligences called Minds spend their leisure time playing with virtual universes of their own creation. This is described as incredibly intellectually stimulating and pleasurable, but there’s a danger in forgetting the real universe altogether.

The reason the Minds must not forget themselves like this is the Dependency Principle: No matter how much better the universe of your own creation is, it’s dependent on your physical existence in the base reality. If your physical form gets broken, your marvelous virtual reality goes along with it.

The same principle is demonstrated in the science fiction novel Heaven by Ian Stewart and Jack Cohen, where the argument that perfect virtual realities are just as real as reality is disproven when the destruction of one’s body in reality also ends that person’s virtual existence.

What does all of this have to do with the real world? As a matter of fact, the Dependency Principle is something people too easily miss when they think about the world we live in.

Consider the idea of making things happen with your mind.

It’s a perfectly real scientifically studied phenomenon that suggestion can affect one’s body physiologically. It’s really pretty unsurprising. If your mind is a function of what happens in your brain and body, as per the scientific worldview, then why couldn’t what happens in your mind physically affect what happens in your body? What happens in your mind is already happening in your body anyway.

But it’s a totally different idea that you could have some kind of telekinesis where you could affect things outside your body with the power of your mind – unless you reach out your hand and touch them, of course. Your mind is not over there.

Similarly, it seems to make sense to a lot of people that, say, a house where a murder was committed or people felt a lot of negative emotions could somehow still contain those emotions. But unless it means you feel the weight of its past because you heard about it, or the house bears physical marks of it that affect your mind, this is completely at odds with what we know about emotions.

(Incidentally, a house might also generate negative emotions because it generates infrasound.)

Consider also the flawed argument that “conservation of energy” implies the possibility of reincarnation: the matter and energy in your body are indeed conserved, but your mind depends on how they are organized, and that organization is not.

If you want to stay anywhere near the scientific worldview, remember the Dependency Principle: Thoughts, emotions, ideas, meanings are perfectly real, but they all depend on a physical basis. If you think they can just float in the air, you need to be a dualist.

The Mystery of Evil

This post originally appeared on The Latest.

Human evil is hard to understand not because the phenomenon is complicated but because of our mental blocks about it.

I had long been looking to understand the nature of evil in the sense of the things humans do to each other and to other beings. Then I came upon a single book that seemed to lift the veil and reveal what was making it seem like such a mystery. This book was Roy F. Baumeister’s Evil: Inside Human Violence and Cruelty.

I don’t know how well I would have understood the point even by reading Baumeister’s book if I hadn’t come to it the right way.

I had recently read Niall Ferguson’s The War of the World, a history of both the world wars. What stuck with me there was the dehumanization of those thought to be evil.

Basically, both the Germans and the Japanese were made to believe that their enemies were sub-human, which allowed them to commit atrocities against them. And then when the British, Americans, and others saw what the Germans and the Japanese had done, they thought they were subhuman monsters and started treating them accordingly – which just showed everyone was the same.

This was followed by Steven Pinker’s The Better Angels of Our Nature: The Decline of Violence in History and Its Causes. This book had many startling insights, but what was most relevant for the current topic was the idea of the expanding circle of empathy, I think taken from Peter Singer.

Simply put: We don’t naturally feel empathy towards most people, let alone towards most other living beings. We naturally feel it only towards those closest to us, but culture can increase the scope.

Yet this doesn’t mean that we now feel empathy towards everyone. Just that the possibility exists.

This brings us to Baumeister’s book. The first insight I want to bring up is roughly what is popularly (not so accurately) thought of as “the banality of evil”. Evil things are often done by perfectly ordinary people who just don’t see the wrongness. Empathy is not universal.

The second part is even more important because it explains why we cannot understand the first.

Humans have strong psychological tendencies to (unconsciously) assume someone doing harmful things must be bad inside and have no understandable reasons; yet if we do something that harms another, we tend to downplay the harm and think it only a reasonable reaction to circumstances.

This creates an illusion where we’re totally different from the bad people, making it even easier for us to do bad things. The truth is that it’s easy to simply not care. That’s why we need ethical thinking. Thinking you’re a good person who would automatically shrink from doing anything bad is a delusion.

Since these are automatic ways of thinking, it’s hard to get this point, let alone apply it.

We are not saints. Those others are not monsters. We are all animals, like the tiger that kills to live without thinking about it, except that we also moralize about others, and sometimes even have the sense to do the right thing ourselves.

Top 8 Ideas I Did Not Write About in 2018

This post originally appeared on The Latest.

This is what happens when I had a lot of ideas earlier but am running dry right now.

These are ideas I was about to write about but didn’t. In some cases, this may have been for the better. You can also take this as a preview of what might be coming next year.

1. “Heaven and Infinite Greed”

Sort of para-theological. If people want eternal bliss out of religion, doesn’t that mean that they’ll not settle for anything less than an infinite good? Isn’t that pretty greedy? So basically, I’d be offending everyone except annoying atheists if I wrote this one.


2. “Flat Earth and Real Science”

After reading an interesting article about the beliefs of flat-Earthers, I started thinking about how their beliefs and ways of reasoning are similar to real science and how they are different. This could have been used as an interesting lesson about the nature of scientific knowledge. I wonder if it would have fit in 500 words.

3. “Why Is Belief so Important?”

Here we go questioning people’s deeply held beliefs again. Why is religion even about believing in the first place? Most people probably wouldn’t even understand the question. Much less in 500 words. No wonder I didn’t write this one.
 

4. “I Don’t Presume to Know, and You Probably Shouldn’t Either”

Another one about the basics of critical thinking. People are so quick to jump to conclusions based on what something sounds like. Well, don’t.


5. “Free Will and the Decision Machine”

This would have presented a thought experiment showing that the feeling we have that we could have done otherwise would be expected even if determinism were true in the world, thus arguing that determinism and free will are compatible… so basically, I was going to resort to writing about philosophy because I wasn’t sure what else to do.

6. “What Conservative Moralists Need to Finally Understand”

Which is basically this: “Liberals” have a different – normal, common-sensical, rational, but still different – view of how morality works, so you should stop accusing them of wanting to let people marry dogs. I was going to compare it to aliens or something because some people seem to have such difficulties even imagining this, unless of course they’re just building straw men on purpose.

7. “The Baby Problem”

There’s a real problem for humanity where some people learn complicated, important things, and then a new generation of babies is born that would need to be taught it all from the start but who also think they know better.
 

8. “How about a Real Frankenstein Movie?”

I don’t think any movie has made much of an effort to portray Frankenstein’s creature in a way that would attempt to follow Mary Shelley’s book. In fact, trying to follow the book more closely would give rise to all sorts of interesting challenges, like how to portray the extremely vague “science” with modern knowledge. I realized it was probably better for me not to write too much about this because I haven’t actually seen any of the existing movies.

Coming up in 2020

I hope everyone’s year has begun well. I couldn’t say whether mine has yet. It’s a work in progress. Looks promising, though. For this blog, I do have some ideas.

I haven’t been posting as actively as I used to for a while now. One reason for that is actually positive: this isn’t my only outlet for publishing writings any more. There was that weekly thing I did for The Latest, which is now over, though I can still write something there occasionally if I want to. I’ve also had some writings published here and there, such as in the student magazine Indeksi, and done a couple of oral presentations.

If you look back in the archives, you’ll see I’ve (re-)posted many of these things here as well. I plan to continue doing that.

In the meantime, my other major project besides my doctoral dissertation on free will is finding freelance writing work that actually pays. I need to get a career going that suits me, and writing would be absolutely perfect. I also notice that all the work of writing here has paid off. Besides the practice, I can potentially find ideas here that I can work into something publishable. I already did that with an article that turned into my first paid piece, which I will (re-)post in the published format soon.

So the future I’m hoping for this blog is that you’ll soon start seeing re-posted articles that I have published elsewhere. I have a couple of existing ones I want to post soon. I also finished moving over the 500-word articles from The Latest, and they’ll run for the next few weeks. (Some might actually appear a second time because I was a little confused about the order in which I had been posting them.)

In the meantime, it may of course be that I happen to want to write something down (like this one) or write a Facebook post I think might as well be made into a blog one (like this one), so I might be writing directly here as well.

Just as an example: Al Gore

Look, just as an example.

I think human-made climate change is a huge problem that needs action.

I know that Al Gore was given a Nobel Peace Prize for raising awareness about it so well.

I’ve also heard a believable case that Al Gore was spreading disinformation; not that climate change isn’t real or anything, but that what he was saying about it was misleading: not technically false, mind you, but still nonsense.

If that’s true, I denounce his actions. It’s as simple as that. He should have been honest. Just because it was otherwise for a good cause doesn’t mean it was okay. It’s not like it was the only option.

If you’re on the opposing side of some political issue from me, you’d probably want me to do this. You’d hate it if I excused people on my side for things like this.

So don’t you do it either, okay?


Things I left out from the above to be more snappy:

Obviously I don’t know a lot about this Al Gore thing. It’s just an example.

Of course, if you’re on a different political side from me, you’ll probably think I do this anyway, because you and the media you follow will likely make different evaluations than I and my media do about who did what and how outrageous it was. But the example also shows that some cases are clear enough that one can and should criticise their “own” side.