GitHub Actions has been pretty spectacular, in particular service containers for setting up a test environment. One annoyance, though, is doing setup on a service container. When the container gets created, the repo hasn’t been checked out yet, so I can’t do the normal mount of some init script into a postgres container like I would with docker-compose locally. There are lots of ways to work around that, but I just ran across an example that uses docker on the runner image and info from the job context to exec a few setup tasks in a service container.
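A minimal sketch of the trick, assuming a postgres service and a schema file at a placeholder path — the runner’s docker CLI plus the service container’s ID from the job context stand in for the volume mount you’d use locally:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_PASSWORD: postgres
    steps:
      - uses: actions/checkout@v4
      # The repo exists now, so copy the init SQL into the already-running
      # service container and exec psql inside it.
      - name: Load schema into the postgres service
        run: |
          docker cp db/init.sql "${{ job.services.postgres.id }}:/init.sql"
          docker exec "${{ job.services.postgres.id }}" \
            psql -U postgres -f /init.sql
```

The key piece is `job.services.postgres.id`, which the job context exposes as the container ID of the service, so plain `docker cp`/`docker exec` on the runner can reach it.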
The other day I packaged up a build of cross-compile tools for bare-metal i686 targets, using the instructions from the OSDev wiki for making a cross-compiler suitable for building bootable images and kernels. I had started fooling around with dockcross to do the builds, but some of the OSDev samples explicitly complain if you use a compiler that targets Linux. So I packaged up something of my own and made a super simple wrapper script to make it less cumbersome to run.
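The wrapper amounts to very little. Here’s a sketch of the idea — the image name and mount point are assumptions, not the actual script — where the container gets the current directory mounted so the toolchain feels like a local compiler:

```shell
#!/bin/sh
# cross: hypothetical wrapper around a cross-toolchain container.
# DOCKER and CROSS_IMAGE are overridable so the wrapper can be dry-run tested.
cross() {
  ${DOCKER:-docker} run --rm \
    -v "$PWD:/work" -w /work \
    "${CROSS_IMAGE:-example/i686-elf-tools}" "$@"
}

# usage: cross i686-elf-gcc -ffreestanding -c kernel.c -o kernel.o
```

Mounting `$PWD` and setting the workdir means relative paths in the compile command just work, which is most of what makes it less cumbersome than typing the docker invocation each time.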
I’ve been poking around with a bunch of low-level programming stuff, looking at old source code and playing with some newer examples, like Alex Parker’s fantastic example of a boot loader with assembly and C++. There’s lots of interesting stuff out there. But I never like having to set up a cross-compiler on my system. I used to do a ton of that, and the toolchain would always get messed up somehow.
I was looking through some of the old books on my bookshelf when I ran across my copy of Developing Your Own 32-Bit Operating System. I spent a bunch of time playing around with the code when I picked up the book, uh… more than 20 years ago. Wow. Eventually, though, Linux took over a lot of that interest and I haven’t looked at the book in decades. It was a wonderful learning system that I picked up a ton of experience from, and that was back before we had decent emulation and virtualization tools to work with. I wondered if the code was still floating around and if anyone was using it.
My parents were getting ready to sell their house a few months ago, and while I was there I grabbed a few super old computer books from when I was a kid. Nostalgia I suppose, and I figured they would be cool to have around. What I didn’t expect was that I would find a printout and a few pages of handwritten notes from a program I was working on when I was 9. But of course, once I did, the only rational response was to see if I could finally get it working. Which it now is.
One of the nice things about getting FFOS 1.2 on my device is being able to use App Manager instead of the simulator plugin to do development. Given that the App Manager replaces the Simulator Dashboard in the newest versions of Firefox, it seems like the kind of thing developers should have access to. So hopefully ZTE figures out a way to get a 1.2+ release on their developer phones.
I picked up a ZTE Open Firefox OS device a little while ago. Given that the developer hub says it’s a “powerful device aimed at developers and early adopters worldwide,” I figured it would be good for some hackery. I had read the specs, so I knew that “powerful” should be pretty suspect. What surprised me was finding out that it’s not really for developers, and increasingly doesn’t seem to be all that open.
A few weeks ago Nathen Harvey was kind enough to stop by and give us a critique of how we do automated Chef testing, along with some workflow suggestions. We had been using chef-solo for recipe development and automated testing, but all our real deployments were done with chef-client, and the differences between solo runs and chef-client runs kept biting us. We had seen what Lookout Mobile does, running a VM for the Chef server in addition to the node during testing, but that seemed like an awful lot of overhead. Fortunately Nathen gave us the awesome hack of using separate organizations: one per developer working on Chef recipes, one for CI, and then the production organization. That gives us complete isolation between the different working areas while keeping setup relatively simple. w00T!
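Concretely, switching working areas is just a matter of which organization your knife config points at. A sketch of a per-developer `knife.rb` — the server URL, org names, and paths here are assumptions, not our real setup:

```ruby
# knife.rb for one developer's sandbox organization.
# CI and production use identical configs that differ only in the
# organization segment of chef_server_url (e.g. "ci", "production").
current_dir = File.dirname(__FILE__)

node_name       "jdoe"
client_key      "#{current_dir}/jdoe.pem"
chef_server_url "https://chef.example.com/organizations/dev-jdoe"
cookbook_path   ["#{current_dir}/../cookbooks"]
```

Since organizations on a Chef server share nothing by default, uploading a broken cookbook to `dev-jdoe` can’t touch what CI or production sees.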
A few weeks ago Arte and Mario asked me to swing by and chat with folks participating in the Momentum accelerator about scaling technology. While we were talking I pointed folks to a few posts and videos of talks that I consider to be the root nodes of a lot of other conversations. I’m not sure I’ve ever pulled this list together before.
How do you start introducing some testing if you have a huge group of existing projects, implemented for the most part in different languages and technologies? That’s the problem I’ve been poking at recently. The first issue is that none of the technology choices were made with testability in mind, and I don’t want to go through a bunch of code refactoring and reorganization just to start testing. I would much rather start testing, and then introduce changes to make things easier to test and to increase the coverage. The second issue is that we’re pretty sure there are some major architectural changes and redesigns coming in the short term. So digging in and harnessing up the guts of a bunch of systems we know will be changing shortly doesn’t seem like a fantastic idea either. Plus some of the changes are more about the operating environment than the code itself, and a bunch of unit tests won’t necessarily help us figure out if the web services break when we move from MySQL 5.5 to 5.6.
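That combination points toward starting with black-box checks at the service boundary rather than unit tests. A sketch of the idea, with placeholder URLs and a hypothetical health endpoint: a smoke check that only needs HTTP, so it works the same against a service written in any language, and against any backing database version:

```shell
#!/bin/sh
# check: hit a URL and fail on any non-2xx response.
# Implementation-agnostic, so the same script covers every project.
check() {
  url="$1"
  status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  case "$status" in
    2*) echo "OK   $url" ;;
    *)  echo "FAIL $url (got $status)"; return 1 ;;
  esac
}

# usage: check "http://localhost:8080/health"
```

Running the same checks against an environment on MySQL 5.5 and one on 5.6 is exactly the kind of comparison unit tests can’t give you.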