There’s been a lot of talk lately about trying to get Sun to open source Java (my personal favorite article is at theRegister). The arguments are good on both sides, and the debates have brought to light some areas that aren’t very well explored. I don’t care all that much about Java; it’s not one of my preferred languages. But the question of “why open source?” is an interesting one in general. I think I agree with the ideas Mitch Kapor spoke about at an SDForum event: that open source is the best way to keep software development honest. During the dotcom bubble lots of companies were able to rapidly suck insane amounts of money out of the economy. They’re extreme examples, but really that’s what goes on with most software companies. Software companies need to create artificial barriers to protect their competitive advantage. They need to protect their margins. Depending on the techniques used to do that, it can result in software which is radically overpriced. Open source is one method for trying to correct this problem with the market.
I signed up on del.icio.us a while ago, and just started adding some bookmarks to it today. I read through the docs page and was going to post about it, but I found an excellent post on edtech.teacherhosting.com describing the principles quite well. I hadn’t noticed the inbox feature before. If you’re logged in and viewing someone else’s bookmarks, you have an option to subscribe to them. Your inbox is a set of bookmarks from the people you’ve subscribed to. Much more social than I thought initially from just looking at the front page. I thought it would just provide a way to store my bookmarks in a central place. I had created a linkblog to collect quick links of passing interest that I might want to refer back to later. del.icio.us is a great way to do that instead. All I need to do is set up a mail-to-del.icio.us gateway so that I can use the “mail this article” option to create bookmarks from my phone (this has ended up being really handy in lots of instances).
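A gateway like that could work several ways; one sketch, assuming the del.icio.us v1 REST API (its `posts/add` endpoint takes `url` and `description` query parameters), is a script that pulls the first link out of a mailed-in message and builds the bookmark request from it. The helper names here are my own, not anything del.icio.us provides:

```python
import re
import urllib.parse

# Hypothetical gateway helpers; API_ADD is the historical
# del.icio.us v1 "add a post" endpoint (HTTP Basic auth in practice).
API_ADD = "https://api.del.icio.us/v1/posts/add"

def extract_url(body):
    """Pull the first http(s) URL out of an email body, or None."""
    match = re.search(r"https?://\S+", body)
    return match.group(0) if match else None

def build_add_request(url, description):
    """Build the GET request URL that would create the bookmark."""
    query = urllib.parse.urlencode({"url": url, "description": description})
    return f"{API_ADD}?{query}"
```

The remaining glue — receiving the mail (e.g. via a procmail rule) and issuing the authenticated GET — is left out, since the details depend on the mail setup.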
There’s a set of posts all linked together at Many2Many about research on blogging. It’s an interesting thread, even if I do think that academic research about blogging is a bit of an odd topic. One point I will bring up is that traditional methods are not set in stone. Elijah says at one point that the discussion that goes on in blog posts and trackbacks isn’t the same as formal peer review. I can see the point there, but I think I disagree. I’m not an academic, so perhaps I’m missing some nuance to the meaning there. But I am a software developer, and we have a strong parallel experience in open source. Open source software has made a very strong case for an informal process open to the public being at least on par with a formal process. Maybe if the blog-based methods of communication aren’t living up to the old methods, we’re just not doing them quite correctly yet. I have every expectation that those issues will get worked through at some point, and future modes of communication will look a lot more like blogging (or at least borrow heavily from this set of mechanisms we’re evolving now).
There is what appears to be a very insightful article over at TheFeature about what is traditionally called the “triple play” of services: voice, data, and video. I heard a decent amount about that coupling of services when I was working with companies trying to structure deals with telcos and ISPs. I know it’s something that people at these organizations are paying attention to. But I never quite put the push to get TV on handsets together with the general principles of the coupling of services. Put in that light I can see why they would be going for something like this. I’m not really sure that it’s valid overall, but it certainly does reframe the effort. And if we ignore the short term issues and look at the pie in the sky, there might be a workable model in there somewhere. However, that doesn’t mean that long term technologies should be forced on the public before either the public or the technology is ready for them. The analysis is great, but although I now might understand the telco position a little better I don’t excuse them for their behavior towards their customers and potential application innovators.
I’m going over to BlogOn 2004 at UC Berkeley on July 22nd and 23rd. I wasn’t able to go to SuperNova, so I’m really happy that there’s another conference I can go to so soon. No info about wireless access, a public metablog for trackbacks, or a wiki, however. I’ll get right on that…
Two recent posts from Always On:
There’s a post over at TheFeature about trying to get the traditional news business to adapt to mobile information delivery. I know my own experiences are probably way out of line with the norm, but I’ve built up a little list of feeds that I can scan when I’ve got a few minutes’ worth of time. My mobile feeds are the ones that provide the best summary versions (or have very short entries to begin with), and I scan through them with my phone. If I find something interesting, I email the link to myself so that I can check up on the full version when I get back to my computer. Not perfect, but it works out pretty well.
I was just re-reading a couple of articles by J.C.R. Licklider. They’re only available as PDF as far as I’ve seen, sorry about that. In particular the second article, “The Computer as a Communication Device”, really struck me as a discussion about “the backchannel” that appears commonly at technical conferences these days. Take a look at the section called “Face to face through a computer” and I think you’ll see that Licklider and Taylor are talking about much the same phenomenon as we witness today, except they were talking about it back in 1968. This is a quote from that section of the paper:
I’m playing the home edition of Supernova 2004. Listening to the audio streams, watching the Wiki for updates, keeping an eye on the metablog, and chatting on the IRC channel on irc.freenode.net. The term used to refer to dealing with all the incoming information is continuous partial attention. I always think of one of the main characters from Idoru when I try to describe it. Gibson is always so oddly apt in his characterizations. One of the characters is a researcher who can spot some kind of metapatterns in large amounts of raw data to pull out the information that he wants. It’s something like letting your eyes unfocus and just trying to spot anything changing in your entire field of view, except that you have multiple streams of input instead of a unified field to take into account. It’s an amusing exercise for anyone interested in computer interfaces. Apparently you can pass the information saturation point of a single interface and still deliver meaningful information if you split it into multiple channels that can be processed in parallel, assuming whoever the information is aimed at can operate in “continuous partial attention mode”.
I personally like to get more than one viewpoint whenever I can, and I think The Torturous World of Powerpoint provides an excellent complement to the Art of Positioning and Presenting talk that Bill Joos gave at the Art of the Start.