Clay Shirky has a post titled Situated Software available online. There are some excellent general points in it, but I’m left with a nagging feeling about the piece as a whole. He captures some aspects well, but there are other areas that I think remain major hang-ups for the software community. Allow me to give an example. I do something I call software prototyping. For some people this makes perfect sense right off the bat, but most say something like “What do you mean by prototyping? What do you prototype?” It doesn’t really matter what I prototype, because prototyping done properly is all about a process of discovery and exploration, not about particular technologies. In the course of prototyping I end up implementing a lot of small and relatively pointed systems. At one point in the essay Clay says:
Businesses routinely ask teams of well-paid people to put hundreds of hours of work creating a single PowerPoint deck that will be looked at in a single meeting. The idea that software should be built for many users, or last for many years, are cultural assumptions not required by the software itself.
Maybe I’m just concentrating on an area that I shouldn’t be, but I’ve written applications to be used in EXACTLY that sense. I’ve written programs meant to be used for just a single presentation, with the understanding that if the presentation went well, the application would be developed further as needed. I’ve also done plenty of programming on small systems meant to be used by limited groups, like 4 people working on a shared website. I think the general availability of open source programs has made it very easy to cobble together example applications and simple limited-use solutions.
I think these usages have always existed to some degree. And I think Clay’s point about the “that won’t scale” argument is well placed; that argument has killed a lot of worthwhile discussion. But there’s another obstacle: the “that already exists” argument of good software engineering. I get crap from experienced programmers all the time because I build applications with an eye toward user needs, not toward existing implementations. It is hard to balance these two requirements. Sometimes the right way to deal with a problem is to slightly tweak an existing application; sometimes the problem requires a different outlook altogether. How do you know when you’re taking the proper approach to serving a subgroup of users, and when you’re simply “reinventing the wheel” without justification? I think this is a question that will remain unresolved for a long time. Hopefully the end of scarcity that he talks about will spur some reversal of this tyranny of software engineering. I would certainly like to see more engineers in a position where they feel they can experiment with solutions before having to commit. That requires a major ethos change, but I think it would be for the best.
As a sidenote, if you haven’t seen Worse is Better, which is linked at the beginning of the post, it’s a very interesting read. I don’t think I had run across it before.