Monday, August 17, 2015

Thoughts on EA and AI

Did I write about the EA kerfuffle yet? No? Great! I have something easy to cover today!

I signed up for EA Origins yesterday to try to buy the DLC for Mass Effect, and it turns out it’s not available anymore… they want me to buy the deluxe edition of the whole trilogy instead, even though I already own two of them…

Wait, never mind. Wrong EA.

So there’s this video game publisher, Electronic Arts, which is generally terrible and makes terrible things. They buy up other, better companies, and turn the whole product line to crap.

But there’s another EA, Effective Altruism. It’s a movement that has been growing for the last few years, and I found out about it when I inadvertently wound up at a party of theirs last December. Awkward! I tried to pretend I knew what they were all talking about; luckily I had read one of their foundational texts just a few months before, but I don’t think they were fooled.

Effective Altruism is historically about finding the charities that can do the most good. The easiest outcome to measure - not that it’s at all simple - is lives saved. The most cost-effective ways to save lives tend to involve investing in basic sanitation and medical care in very impoverished countries.

EA can branch out, though, into reducing suffering, or helping animals, or such things. You can track donations other than money… it turns out each blood donation doesn’t actually save three lives, though I don’t know if anyone was ever fooled by that sort of advertising.

So there was an EA conference earlier this month, and though I didn’t go I did read this very interesting article about it from Dylan Matthews… along with a dozen rebuttals and concurrences and myriad angry tweets.

Apparently this conference had a big focus on existential risks. These are things that can end the species… big asteroid impacts, the sun exploding, that sort of thing. All well and good, I too am concerned about such things. So should we all be. The trouble is, as Matthews points out, that though you can quantify the number of deaths (the world population, give or take) it’s very hard to estimate how much effect your efforts will have in preventing this outcome.
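Matthews’s point can be put in simple expected-value terms. A minimal sketch of the arithmetic (every number below is invented for illustration; none of them come from any real cost-effectiveness estimate):

```python
# Toy expected-value comparison between a well-measured intervention
# and a speculative existential-risk one. All numbers are made up.

def expected_lives(payoff_lives, probability):
    """Expected lives saved: the payoff if the intervention works,
    discounted by the probability that it does."""
    return payoff_lives * probability

# A well-studied charity: a $1000 donation at a fairly certain
# cost of roughly $5000 per life saved.
bednets = expected_lives(payoff_lives=0.2, probability=1.0)

# A speculative bet: the payoff is the whole world population,
# but the chance that this particular $1000 tips the outcome
# is pure guesswork.
xrisk = expected_lives(payoff_lives=7e9, probability=1e-10)

# The arithmetic says the speculative bet "wins" whenever the
# guessed probability exceeds about 3e-11 -- but the conclusion
# is only as good as the guess, which is the whole problem.
print(bednets, xrisk)
```

The multiplication is trivial; the hard part, as the article points out, is that nobody has a principled way to pick the probability, and the conclusion swings wildly with it.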

Some proponents of the existential risk focus also account for the loss of all the potential future people who might never be born… that’s a bit too far, as far as I’m concerned. If humans went extinct through some peaceful means - playing too many video games instead of going outside, for example - that would be a bad thing, but it wouldn’t be hundreds of billions of times worse than the deaths of the presently alive people.

When it comes to existential risks I tend to be most concerned about those I can have an effect on… climate change, biodiversity loss, antibiotic resistance, vaccine resistance, that sort of thing. But this conference was in Silicon Valley, it was attended by computer programmers, and they ended up focusing on the risk of a malevolent artificial intelligence taking over the world. We need to invest in AI research to prevent such an outcome, they proclaim!

I feel like I should point out that in Mass Effect, which I’ve become far too familiar with this week, the future human civilization prevents hostile AIs not by investing in AI research, but by banning all AI research and paying my character to shoot AI researchers in the face. It would be foolhardy to base Effective Altruism on such fictions. But how much better than fiction is a speculation unbounded by the burden of fact?

As for me, I’ll make my usual donations to OXFAM and MSF, content in the knowledge that I am trading uncertainty for certainty. And who knows, if traditional EA works well enough, I might help some sick children in Malawi grow up to be AI researchers.
