Neil Hopcroft

A digital misfit

A question of quality

“The constant release of new versions causes a lot of grief for IT professionals and CIOs would much prefer to wait for something that actually works and is not full of bugs.”

I see interesting times ahead…


20 comments

  1. Debian stable is a step in the right direction. However, it doesn’t solve the fundamental problem – software is crap.

    There are a number of reasons for this:

    – the desire for ever increasing feature sets
    – the widespread use of poor programming and testing practice
    – system complexity
    – varied hardware environments
    – rushed timescales
    – ease of patching later, once in the field
    – programmers are INTPs; they don’t communicate well with each other
    – unclear specifications, ratified for political rather than technical reasons

    • Plus, software is quite complicated, and it’s just hard work to do it right. It’s getting bigger and more complicated all the time, so it’s getting harder.

      I really don’t see how it will ever get any better.

      • Of course it *can* get better; it’s just whether it *will*.

        We are one of the problems [0]. By that I mean we make these things; we’re obsessed with complexity, with sneaky ways of doing things, with wrestling with the bits to make them come out right. Thing is, we can only sell the latest and greatest if it really is the greatest, and by our definition that means laden with the most features.

        Of course, that’s not true; the public doesn’t care what algorithms are used inside their DVD machine as long as they can watch Blackadder. But in order to justify our existence and create a need for the things we make, we must always make last month’s model seem obsolete.

        We are also driven by making things work; if they work we want to move on to the next challenge, making the next thing work. We don’t think about how it’s going to be used, or what happens if something goes wrong, because that’s not sexy, that’s not the latest.

        The public are complicit in this too: they are told (by people like us) that computers are intrinsically complicated, and that they just have to learn to deal with that. This isn’t true, any more than ‘cars are intrinsically complicated’ is true – lots of people drive them without having to concern themselves with their intrinsic complexity; they pay a mechanic to do that bit [1].

        What can we do to make things less complicated? Right now we have some complexity problems: Windows is a horrible spaghetti of dependent code, the Linux kernel is nearly as bad, and both projects seem to be getting worse rather than better at the moment. There will come a day when they both become unmaintainable. What then?

        There are some ways out – the Hurd looks interesting, since microkernels should be better suited to more complex systems: you get more firebreaks. Better development practices help, and tools like UML (as in the modelling language, not User Mode Linux) potentially offer a visualisation of complex problems in a better way than flat C++ code. There was also an interesting project involving a huge array of 6502s working in parallel – again, introducing firebreaks by using lots of little simple components rather than one big scary one.

        [0] I don’t actually know if you are part of the problem; I know I am, and that a lot of my friends are too.
        [1] And computers should need a good deal less maintenance than cars.

        • Oh, I’m certainly part of the problem. We’re building more software, on more complex platforms, against newer and woollier specs, than ever before. Much more of our stuff has to interwork with other systems than ever before.

          And our customers still vote for more features and for new releases with their wallets. They do not vote so clearly against the costs of change or complexity.

          We do care – because we do our own support, and because we are very interested in making these products do well – how things will be used and how manageable they are when they go wrong. But even so, the long trend is toward more complexity & more overall flake. Especially interop flake, as opposed to simple bugs.

          • Interoperability is a difficult problem – if you’re working with good standards it is easier, but telling marketing people that ‘the server you are trying to talk to is broken’ doesn’t mean they think it’s OK to release a product like that… which I guess makes the whole thing worse over time.

            We need a good clean-out and to start again – make the interweb2 or something, and learn all we can from what we have so far. But that will never happen: there’s too much legacy support there already, and you can’t just switch… I guess IPv6 will give us some of the necessary transition, but I’m guessing it’ll not actually be used to ditch the junk.

          • IPv6 will be totally transparent to most of the troublesome layers, I think. There’s just not a way to get a clean start – all we can do is try to keep the bits we work on tidy, and send large, cruel men to visit the standards bodies when they look like they’re having a brain-fart.

  2. I don’t know how the standards bodies get to be so mad. Some intimate & unhealthy relationship with the world of commerce, I think.

    I dunno about this magic-bullet stuff. I mean, better languages & so on ought to effect some sort of improvement, but the general ‘how can we make better software more quickly’ problem has been studied so hard, for so long, that I would be surprised if there were any magic bullets left to find.

    • Standards bodies

      Some intimate & unhealthy relationship with the world of commerce, I think.

      I think that’s the root of many of the problems. Most of the people on standards bodies are from commercial companies and have an interest in promoting that company’s intellectual property. The ideal standard, from a company’s point of view, is not the simplest one or the most technically innovative but the one that includes the largest number of its patents.

      • Re: Standards bodies

        They need to strike a sensible balance so that people actually implement the standard. It seems the current approach is to drop your technology into the standard without telling anyone it’s yours, then, once people start actually making a profit from it, conjure up a patent from out of nowhere.

    • Of course there’s a magic bullet, and once we see it we’ll be cursing ourselves for not spotting it earlier. Maybe we should be encouraging compsci undergrads to take more acid – it worked for art, maybe it’ll do the trick for us too.

  3. I’m all in favour of radical leaps and progress by smartness. But I guess I’m less optimistic than you are that a single solution like that exists.

    There are already thousands of CS researchers out there, trying different ways of going about the business of making software, thinking anywhere from just outside the box to ‘see that speck over there – that’s the box’. Come to think of it, there are plenty of people at work on developing flying cars.

    It seems to me that the business of writing software is mostly about taking a complicated problem and cutting it up into bits that are simple enough for people to understand. It’s not really about the computers – it’s about the limits of our brains. Software innovations – Structured Programming, objects, and so on – are just abstractions that make stuff easier for us to comprehend.

    But there’s always going to be an irreducible complexity in the problem that we have to deal with, which can’t be fixed by tools.

    • Of course there is an intrinsic complexity in all problems, but what makes you think we can’t solve even the really complicated ones by being able to think about them in better ways? Remember, as recently as 50 years ago no one was really thinking about the possibility that we might be raising numbers to the power of a number 512 bits long, yet now it’s something we do nearly every day without even realising we’re doing it. If we can make that kind of progress with mathematically complex problems, think what a decent bit of visualisation could do for some of the other taxing things out there.
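
      To make that example concrete – this is just an illustrative Python doodle, not anything named in this thread – the trick that makes a 512-bit exponent tractable is square-and-multiply modular exponentiation:

          # Square-and-multiply: compute (base ** exp) % mod without ever
          # forming the astronomically large number base ** exp.
          def modexp(base, exp, mod):
              result = 1
              base %= mod
              while exp > 0:
                  if exp & 1:                       # low bit of exponent set
                      result = (result * base) % mod
                  base = (base * base) % mod        # square for the next bit
                  exp >>= 1
              return result

          # A 512-bit exponent costs ~512 squarings, not 2**512 multiplications.
          assert modexp(7, 2**512 + 1, 10**9 + 7) == pow(7, 2**512 + 1, 10**9 + 7)

      Every secure web connection does roughly this, which is how we end up doing it every day without realising.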

      • Right – we can find solutions to particular well-described complex problems, and we can find somewhat better ways of working with the vast, tangly & undisciplined problems of making software. And I’m entirely in favour of that, and I’m sure we will go on making incremental improvements.

        But I don’t think we will ever make the complexity go away, or find some grand leap that just makes everything simple. I think the complexity is as much about what we want the computer to do, about the woolliness of our requirements & wishes, as it is about anything technical.

        • Sure, there’s a bunch of people stuff – soft things that we are – which will make it hard to define the problem, but I don’t think that is a question of the complexity of the problem, merely of its definition…

          Besides, all we need to do is bring the complexity within the bounds of our understanding (using any tools available to us); I wouldn’t expect the complexity itself to go away.

  4. But UML is a language…?

    Erm, no – well, not the way it is currently used, anyway, though something like Together [0] makes it more like one, with a nearly sensible round trip between the pictures and the code. (I once made the mistake of trying to run the Java version on a machine with less than 512MB of memory; it took something like twenty minutes to load, during which time the UI hung.)

    But that’s all a bit of an aside, really. I’ve been thinking about some of this visualisation for a while now and wondering how it can be made to work. Starting from the ideas behind Cube, and some of the UML pictures, I wondered if it would be possible to create a visual language based upon event flows through systems, perhaps with code snippets within processing boxes and state machines. None of this is very mature right now, but I’ve started coding up a prototype to see if it really can be useful – the intention is to make it easy to see what is going on.
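
    To give a flavour of the sort of thing I mean – and this is a made-up Python doodle, not the actual prototype – picture processing boxes holding little code snippets, joined by event flows you would draw as arrows on screen:

        # Hypothetical sketch: each Box holds a snippet; wire() is the arrow
        # you would draw between boxes; events flow along the arrows.
        class Box:
            def __init__(self, name, snippet):
                self.name, self.snippet, self.outputs = name, snippet, []

            def wire(self, other):            # draw an arrow between boxes
                self.outputs.append(other)

            def fire(self, event):            # run the snippet, pass results on
                for result in self.snippet(event):
                    for out in self.outputs:
                        out.fire(result)

        parse  = Box("parse",  lambda line: [line.split()])
        count  = Box("count",  lambda words: [len(words)])
        report = Box("report", lambda n: print("word count:", n) or [])

        parse.wire(count)
        count.wire(report)
        parse.fire("the quick brown fox")     # prints: word count: 4

    The pictures would just be a view onto the wire() calls – the point being that the flow is visible at a glance instead of buried in flat code.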

    At the moment most of the code we create is made the way it is because it’s easier for computers to understand that way, rather than because it’s easier for humans to understand… I’ll try to explain what’s in my head next time I see you, so you can laugh at the strangeness of it all then.

    [0] http://www.togethersoft.com isn’t in at the moment, but it might just be sulky about serving to Japan for some reason…

    • Visual programming

      For a taste of 2D visual programming you might want to look at FPGA development systems (I believe Altera’s got one for free download, and you can play with the systems you produce in a simulator). “Stick the boxes together” code and textual code have been integrated in this sort of environment for some time now (the first one I used must have been in 1998 or so, and it was a mature technology even then).

      For what it’s worth I find visual programming much harder and less flexible than textual programming since with textual programming you can use parameters to reconfigure large chunks of your code automatically (adding “n” ports to a device, for example), something that the FPGA visual systems don’t support (you’d have to draw each new port in and link it up manually). The fact that visual programming isn’t used at all by those producing large systems would seem to indicate that I’m not alone in my opinions of it. But then again, I find I simply can’t write worth a damn in a visual text processing system like Word while LaTeX feels like second nature…
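
      To make the parameter point concrete (a rough Python-flavoured sketch, not the syntax of any particular FPGA tool): in text, ‘give the device n ports’ is a one-line change, whereas in the visual editor each extra port must be drawn and wired by hand.

          # Illustrative only: parameterised port generation in textual form.
          def make_device(name, n_ports):
              return {"name": name,
                      "ports": ["%s_port_%d" % (name, i) for i in range(n_ports)]}

          uart8  = make_device("uart", 8)    # eight ports...
          uart32 = make_device("uart", 32)   # ...or thirty-two: same source text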

      Whether you use graphics or text, I think the whole component programming idea is a good one (I’m writing a component-based compiler system at the moment, so I’m biased!), though it hasn’t made many inroads outside academia yet.

      • Re: Visual programming

        The way I’m thinking is that we’ve now got plenty of processor cycles to spare on desktop development machines, so those should be used to help the programmer create better programs with fewer braincells. Right now they’re forced to use semi-braindead hacks like MFC – how about making pictures instead? That likely doesn’t suit everyone, but my head is better in two or three dimensions than in one (I’m also aware that my spatial awareness is considerably above average, while my understanding of printed words, hence code, is considerably below average).

        Could you explain ‘component’ in the context you are using it? I’ve heard it before in the context of ActiveX, but I’m guessing that’s not quite what you are talking about.

        • Re: Visual programming

          we’ve now got plenty of processor cycles to spare on desktop development machines, so those should be used to help the programmer create better programs with fewer braincells.

          Yes, especially since powerful development tools and complex compilers don’t need to (shouldn’t) lead to slower executable code. I don’t care if my compiler takes a little longer if it’s going to check the correctness of my program for me as well as compiling it. Making every new language that comes along interpreted just so that it’s easy to port between architectures is not necessarily a good thing (what happened to compilers that compiled to C and then let the system cc deal with that?).

          Could you explain ‘component’ in the context you are using it

          Borrowing from this paper: “A software component is an encapsulated piece of software with an explicit interface to its environment”. The crucial thing is that components can be put together in any configuration as long as any interfaces (an interface is essentially a list of function calls and events) that are joined are of the same type. I’ve not used ActiveX components but they’re apparently this sort of thing.

          The fun stuff happens when you can change the configuration of components at run time, or send new components over a net connection, and make the system lightweight enough that it can be used on a tiny 8-bit microcontroller! :-) Information on this should start to appear on my work web pages over the next few months as we get it working.
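
          A toy sketch of the idea (nothing like our actual implementation – just hand-waving in Python): components carry typed interfaces, and any two may be joined, even at run time, wherever the types match.

              # Toy sketch: components join wherever interface types match.
              class Component:
                  def __init__(self, name, provides, requires):
                      self.name = name
                      self.provides = provides      # interface type offered
                      self.requires = requires      # interface type needed
                      self.downstream = None

                  def connect(self, other):         # rewire at run time
                      if self.requires != other.provides:
                          raise TypeError("interface mismatch")
                      self.downstream = other

              lexer  = Component("lexer",  provides="tokens", requires="chars")
              parser = Component("parser", provides="ast",    requires="tokens")
              parser.connect(lexer)                 # fine: both sides speak "tokens"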

  5. One of the problems is that we’re chasing a moving target: as the computer systems get better at dealing with the tax system/pricing structure/loyalty-points scheme/whatever, the scheme gets more complex, always pushing the bounds of what the computer system can do (it has to – if not, the computer vendor will have nothing to vend, because last year’s machine will do the job).

    There really isn’t *that* much complexity in the real world – accounts systems are near-trivial to code, but people still get things wrong – and maybe things like flight-control systems and train routing and signalling algorithms have some intrinsic complexity, but they’re really not that difficult as problems go.

    Most of the complexity out there is the product of the computer (tech) industry justifying its own existence.
