Neil Hopcroft

A digital misfit

I’ve been staring into the singularity.

All of this kinda makes sense on a technical level, but I’m not sure I quite understand the political and/or economic factors involved here, so I thought I’d write a little about those.

There are, I suppose, a few possibilities. Given that we can’t see what happens beyond the point of the singularity, we can only explore the time leading up to it. In many ways I guess we can look at the development of general-purpose computing for some ideas of how a seed AI might be grown into a post-singularity Power. At the point of singularity there will be a single system which transcends.

Early computers were often installed by large companies, with the concept of a home computer non-existent. But now we have a number of different parts to the computing industry – home computing, server systems, open source, handheld systems, etc.

Which of these has an analogue in AI terms? I guess there are really four possibilities: corporate, government, academic or personal.

It could be a corporate entity that creates the seed AI – why would a corporate entity do this? There would have to be some kind of profit motive. If a corporate AI were to transcend, that would give a significant commercial advantage should you be the first to own such a thing. In the weeks after transcendence the owner would be able to gain control of all resources available on the planet, and perhaps beyond.

Government would have a similar incentive to control the first transcendence, though with a slightly different stance, perhaps with a view to gaining control of territory rather than purely financial gain.

Academia is a bit more of a loose cannon; there are a number of potential incentives in academia, most likely the wish to create a kind of Star Trek-like world where no one is wanting for anything, it’s all just there.

Personal, of course, is even more difficult to envisage, but also far less likely, due to the funding necessary to create such a seed. The most likely way for this to happen would be the creation of something like the FSF: a non-profit organisation of people working together toward the common goal of an AI for the common good.

So what? Essentially what I see from this is that the seed AI will take on different forms depending on who ‘owns’ it. It likely won’t remain owned in any sense for long, since it’ll outthink its owners fairly quickly; however, if we are not on the same side as the owners (in some sense) we are likely to have a hard time at transcendence.

There are of course some other considerations if the singularity is going to happen within our lifetimes. What happens to ownership? Does that make any sense as a concept beyond the singularity? Probably not. If that’s the case, what is the argument for continuing to pay into a pension?

How do we know it’s coming? I’m going to assume I’m not going to be a part of the team working directly with the transcendent AI. Which means that all I’ll know about what is going on is what I read in the news. This news coverage will be a bit like internet time gone crazy; it’ll ramp up, getting faster and faster. We’ll get a good sense of what’s happening maybe a year in advance of the actual transcendence day – I’m not sure it could sneak up on us much faster than that, but things will start making less and less sense as we get closer.


17 comments

  1. i don’t understand why computer intelligence should be related to some kind of malevolence or at least selfishness. human needs are driven by a few layers of flight response and resource hoarding instinct on top of a pain reaction to cold and lack of food. unless an intelligence had some kind of internal structure that depended on resource availability then it wouldn’t have any drive to achieve power or control access to the resource it needs. even self awareness doesn’t predicate survival instinct.

    however, it is clear that singularity does imply that the main moral precepts upon which the legal structure is based would no longer be applicable. the question is whether you get some kind of social revolution when the legal system is incapable of keeping up with the technology.

    • I’m envisaging that the transcendent power would inherit a bunch of ideals from its creators. Unfortunately the most likely people to fund such a project would only do so on condition that they could make some profit from doing so. Which is completely irrelevant since we enter a whole different game beyond the singularity anyway.

      • Anonymous

        singinst

        Check out the Singularity Institute, and the SL4 email list. Plenty of discussion has gone on about working on improving the all-important (probably) initial conditions of the Singularity. The Institute is a direct outgrowth of the efforts of the author of that article you linked to.

        Make sure to read up on the extensive archives before attempting any post to SL4, as it is one of the most closely moderated lists around.

        • Re: singinst

          For sure, but they look a scary bunch. I don’t really have the background to fit in there; I’m just a simple code integrator.

  2. Here’s hoping those gods are on our side…

    To some extent the majority just need to be kept out of the way long enough that they don’t interfere with the research to create the seed AI.

  3. I must admit, I’ve never understood why the singularitarians take the idea so seriously. It seems to me to be based on a raft of unexamined assumptions inherited directly from the movies. For example:

    If we build a smart AI, will it go off to build smarter AIs? Well, it might, but that would depend on what its motivations were and on what it was good at. The question of motive is quite separate from the question of smartness, and we have no idea what motives we might be able to put in our AIs when we eventually work out how to build one. There’s a whole science of AI motivation that we haven’t even started yet. The notion that they will automatically (or even probably) go off to build their successors seems to me to be a terrible failure of imagination.

    • Well, yes and no: we only need one to get motivated to both improve itself and not destroy everything, and we’re out of this phase of our history. Our problem is that our tiny minds can’t deal with what it *could* look like should that happen – there might be some limit on how much intelligence is actually possible in a conscious unit, or maybe everything with an IQ of 6000 turns into Marvin.

      I like the ideas and can kinda follow the logic, but I’m not convinced that the economics (or social structures) allow it to work.

      • Right – one might get motivated in exactly the way their story goes, with the predicted results. But there’s a dizzying wealth of other interesting possible stories. Here are a few:

        – When we have built an AI, we find that motive and intelligence are sufficiently separable that our AIs continue to have the motives we choose for them. The long and the short of it is that they may become very smart but they do not run amok.

        – When we get to the engineering limits, we find that compute-per-watt or compute-per-kilo has a tough ceiling, so we can have huge, very smart AIs, but only a few of them.

        – Autonomous manufacturing or nanotech turns out to have limits that mean that amok AIs have a hard time replicating. Hence, we have physical war with them over access to resources.

        I could go on, for hours, and I don’t even do this for a living. I think the whole singularitarian notion is just ignoring a lot of these questions, making for a quasi-religious insistence on a particular story.

        What I don’t understand is, why are they ruining such an interesting question by answering it in such an unimaginative way?

        • For sure there are a lot of possibilities – and I know I’m never going to get my head around most of them. One of the things we do know is that science is progressing, and it is likely to continue to do so at a more and more rapid rate until something happens to stop that. It feeds off itself, it gets stronger. Following that trend gets us to something that looks (from here, now) like a singularity… but is that a valid extrapolation?

          There’s a lot of exciting stuff ahead, but it’s also extremely scary. I’m not sure we can prepare for that kind of transition; it’s something that, if it is going to happen, will happen outside of our control.

          One of the big mistakes people are making is projecting human emotions onto these ‘powers’, seeing AIs that need to fight for resources or otherwise see humankind as a threat. It probably doesn’t need to be like that.

          Interesting times, indeed, and I would like to be a part of it, whether there is a singularity at the ‘end’ of it or not.

          • I agree completely. I don’t think it is a valid extrapolation – the only excuse for claiming ‘singularity’ is the scenario where the AIs keep making themselves smarter at an accelerating rate.

            Without that, the rate of technological advance will continue to be limited by the rate at which people can make sense of the new tech & assimilate it into their lives.

            Plus, individual technologies do tend to frustrate their forecasters by developing unexpected hard limits. Remember, we still don’t have our individual jet-cars!

          • http://www.moller.com/ …which admittedly are still a couple of years out, but they’re on their way.

            Things develop in ways we don’t expect them to – that’s not to say that some of this stuff is impossible, just that we don’t yet know what the problems are and whether they have solutions. We can only see so far along the development line.

            Energy is going to be a crunch point over the next few years; we’ll need more and more of it until we can figure out more efficient ways of doing things.

            If the AIs can make themselves smarter (without turning into Marvin) then we end up in a Matrix-style future instead. At some level it’s a no-win situation; at another, I can’t wait to see what happens.

          • They’ve been on their way, just a couple of years away, for decades.

            Do you think you’d move into the Infinite Holodeck if you could, go virtual? :-)

          • True.

            I’d like to visit but not move in… we’ll have to see what options are available by the time that comes to be.

          • I’ve had a revelation. I think I see what the very-bright AIs will be for.

            The starting question is, given that AIs are most suited to making ‘information goods’, where is the most money currently made *from* IG? That’s where AI will first be put to work, and so is where the power of self-smartening AI will first be deployed.

            And the answer is, of course, the media. TV, film, advertising. That’s where all the money is, that’s where there’s vast flexible demand, so that’s where AI can bring a competitive edge.

            Imagine a world where all the media were highly personalised, hugely compelling, infinitely persuasive. All movies would thrill or terrify you to the precise extent their makers intended. All adverts would make you want, really really want, the products they depicted. All news stories would convince you completely of the views they proffered.

            It won’t take long before a couple of media conglomerates have all the effective control in the world, since control of the news means control of the voters. And once they have that control, they will never relinquish it. After that, nothing will be the same. That’s your singularity right there.

          • twenty minutes into the future

            …I’m not sure that media does have that power. It seems that way at the moment, but I think there’s a big backlash against it coming, and it’s going to be quite a shock for them when it happens. Everyone is already quite disillusioned with politics and the inequity it encourages – sooner or later something crashes down.

  4. Interesting questions to me are:

    Where are we going to get the infinite energy from for the infinite intelligence?

    What happens to the failed, autistic, or insane trial intelligences that get built? If we build one and it slumps into depression or refuses to talk to us, what do we do? Reboot it?

    • That’s what all the Type I/II civilisation stuff is about, isn’t it? We start by dismantling Jupiter and turning it into a Dyson Sphere… most of the sun goes to lighting the universe with the kind of pin-prick light we see from other stars.

      But, yes, there are some ethical problems about what to do with the kinds of lows that’ll come with hyperintelligent manic-depressives.
