I’ve been staring into the singularity.
All of this kinda makes sense on a technical level, but I’m not sure I quite understand the political and/or economic factors involved here, so I thought I’d write a little about those.
There are, I suppose, a few possibilities. Given that we can’t see what happens beyond the point of the singularity, we can only explore the time leading up to it. In many ways, I guess we can look at the development of general-purpose computing for some ideas of how a seed AI might be grown into a post-singularity Power. At the point of singularity there will be a single system which transcends.
Early computers were often installed by large companies, with the concept of a home computer non-existent. But now we have a number of different parts to the computing industry – home computing, server systems, open source, handheld systems, etc.
Which of these have an analogy in AI terms? I guess there are really four possibilities: corporate, government, academic or personal.
It could be a corporate entity that creates the seed AI – but why would a corporate entity do this? There would have to be some kind of profit motive. If a corporate AI were to transcend, being the first to own such a thing would confer a significant commercial advantage. In the weeks after transcendence the company would be able to gain control of all resources available on the planet, and perhaps beyond.
Government would have a similar incentive to control the first transcendence, though with a slightly different stance – perhaps with a view to gaining control of territory rather than purely financial gain.
Academia is a bit more of a loose cannon. There are a number of potential incentives in academia, the most likely being a wish to create a kind of Star Trek-like world where no one wants for anything – it’s all just there.
Personal ownership is of course even more difficult to envisage, and also far less likely, given the funding necessary to create such a seed. The most likely way for this to happen would be the creation of something like the FSF: a non-profit organisation of people working together toward the common goal of an AI for the common good.
So what? Essentially what I see from this is that the seed AI will take on different forms depending on who ‘owns’ it. It likely won’t remain owned in any sense for long, since it’ll outthink its owners fairly quickly; however, if we are not on the same side as the owners (in some sense), we are likely to have a hard time at transcendence.
There are of course some other considerations if the singularity is going to happen within our lifetimes. What happens to ownership? Does that make any sense as a concept beyond the singularity? Probably not. If that’s the case, what is the argument for continuing to pay into a pension?
How do we know it’s coming? I’m going to assume I won’t be part of the team working directly with the transcendent AI, which means that all I’ll know about what is going on is what I read in the news. This news coverage will be a bit like internet time gone crazy – it’ll ramp up, getting faster and faster. We’ll get a good sense of what’s happening maybe a year in advance of the actual transcendence day – I’m not sure it could sneak up on us much faster than that, but things will start making less and less sense as we get closer.