<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://rowehl.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://rowehl.com/" rel="alternate" type="text/html" /><updated>2026-03-14T20:18:39+00:00</updated><id>https://rowehl.com/feed.xml</id><title type="html">Miker</title><subtitle>Personal Blog</subtitle><entry><title type="html">Startup Superpowers</title><link href="https://rowehl.com/2026/03/14/startup-superpowers.html" rel="alternate" type="text/html" title="Startup Superpowers" /><published>2026-03-14T15:58:00+00:00</published><updated>2026-03-14T15:58:00+00:00</updated><id>https://rowehl.com/2026/03/14/startup-superpowers</id><content type="html" xml:base="https://rowehl.com/2026/03/14/startup-superpowers.html"><![CDATA[<p>I did something this year that I thought might never happen again during my
career. I got a “normal job”. I’ve been working exclusively on startups for at
least the last 25 years, with the few exceptions being times when we’d sold a
project to a larger organization and I was there to help with the transition. And those
transitions haven’t always been very positive. That made me think that I was
just a startup person. Maybe I was just too feral to be able to exist in a
structured organization for any length of time. I was very wrong.</p>

<p>Because of a long list of things that would only distract from the main point
if I let myself get too much into them, I was at a fork in the road at the end
of last year. I could either start up a new business or try to find myself a
job somewhere. I was of course leaning toward the former. But also when I
spoke to folks I kept hearing how horrible the job market currently is. I’m
over 50 now, and for folks in my age group especially, the word was that
finding a good technical role is extremely difficult. Technical management isn’t
as much of a problem, but an individual contributor role would be different.</p>

<p>I certainly understand part of what folks see there. But I decided to go 
counter to the advice and at least try to find something inside an existing
organization as an individual contributor.</p>

<p>Along the way I’ve found that many startup skills have translated
insanely well to working inside a larger organization. That’s part of why I
wanted to put a post together about this. For the older tech folks who maybe
are having some issues, there might be a bunch of skills you have that you
just need to figure out how to highlight. This is my partial list of things
that surprised me in a positive direction when going into a more established
company.</p>

<p>The set of skills around having a bias towards action works well in a number of
different contexts. In the startup world
when I’m working with a cofounder or talking to a potential
customer I try to not just flat out say “no” to things. Interesting ideas come
from all over the place. And even when something on the surface doesn’t seem
workable, there’s often a core of interesting insight in the direction. I
didn’t think the set of techniques I’ve built up for those interactions would
be that valuable inside a bigger organization. But I’ve found that the more
broadly I apply them, the smoother everything goes. I just treat everyone
around me like my customers. I try to understand what they really want and not
just what they’re currently asking me for. If they’re asking for things that
don’t seem to make sense, I try to figure out what they’re actually trying to
get done and how I can do something that moves them in that direction. Everyone
is my customer. The people managing the projects I’m helping with most
obviously, but also my teammates, my manager, and whoever owns any of the repos
I’m checking things in to. I try to find some way to make whatever I’m doing
into something that helps them too. To my surprise that’s pretty easy to do
after a few decades of startup life. I figured it might be hard to navigate the
competing objectives in a larger organization, but I’m not feeling that part at
all.</p>

<p>Juggling competing objectives feels almost trivial in a normal job compared to
life in a startup. Sure, I have an actual task list to work my way through, with
expectations that I track what I’m doing, contribute to reviews, handle a few
async requests from other teams, keep an eye on what my coworkers are doing,
and learn about the systems I don’t already know along the way. And I’m sure as
I get more embedded in the organization the cadence of a bunch of those things
is probably going to change. But I’m used to that list of concerns plus hiring
a few new people, onboarding someone or getting an existing person ready for a
new role they’re taking over, investor discussions, fundraising, customer
onboarding, partner management, budgeting, product planning and tradeoffs,
support, and some bit of legal review. Now, for the most part, I just need to
get some bit of tech working. Sure, I might have to do it while the person who
would normally help get it done is overloaded with other stuff. But compared to
the average challenge I had to navigate before, totally doable.</p>

<p>And finally, the intensity is turned way down. It’s easy to stay calm,
cool, and collected when most of the problems you have are just that there’s a
lot of work to get done. Sometimes it’s a bit of a scramble, and you might have
to get inventive in how you stage things so that you can ship a partially
working version while you figure out how to make the real version. But on my
scale of intense projects that’s like a 1 or 2 out of 10. It doesn’t make it
above a 7 until the current month’s payroll is hanging on the success or failure
of the prototype you’re trying to hack together in the reception area of your
next meeting. And if you can use that difference in your outlook to make the
rest of the team feel more comfortable, or use your perspective to spot
something that simplifies the situation, you’re also getting leveraged value
from it: not just keeping your part of the project on track, but helping the
whole team find a way through. I’m sure some of that is just raw experience;
I’ve been doing this stuff for more than 30 years, so I should be good at it.
But the startup side of that experience in particular seems to have cranked up
the effectiveness of the methods I’ve ended up being comfortable with.</p>

<p>So for the folks out there who are wrestling with the idea of what comes
next: if you’ve been startuping, and you love your techie life, but it sounds
like you really need to get into management because it’s supposed to be “the
right thing to do” for someone your age, please don’t accept that it’s what you
need to do. The world still needs good technical people building things. Please
don’t stop.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I did something this year that I thought might never happen again during my career. I got a “normal job”. I’ve been working exclusively on startups for at least the last 25 years. With the few exceptions being times when we’ve sold a project to a larger organization and I was there to help transition. And those transitions haven’t always been very positive. That made me think that I was just a startup person. Maybe I was just too feral to be able to exist in a structured organization for any length of time. I was very wrong.]]></summary></entry><entry><title type="html">Modern RSS Usage</title><link href="https://rowehl.com/2026/01/05/modern-rss-usage.html" rel="alternate" type="text/html" title="Modern RSS Usage" /><published>2026-01-05T17:09:00+00:00</published><updated>2026-01-05T17:09:00+00:00</updated><id>https://rowehl.com/2026/01/05/modern-rss-usage</id><content type="html" xml:base="https://rowehl.com/2026/01/05/modern-rss-usage.html"><![CDATA[<p>There used to be a lot of attention and development focused on RSS, a format
meant to make it easy for users to subscribe to updates from a site and merge
multiple subscriptions into a personal news site. There were all kinds of
homepage creators and news readers that ingested RSS, and additional formats
for sharing your set of subscriptions, or even how much you read and how much
time you spent on the different sources. At one point I worked on Feedster, a
search engine dedicated to indexing feed content. But as the
centralized networks like Twitter, YouTube, and Facebook gained popularity it
was easier for them to just provide a lot of the capabilities that people were
looking for. So instead of publishing on their own and using RSS to stitch
things together, more and more people just started and stayed on the
platforms. Most people describe this as the centralized platforms having “won”
over RSS. That’s true from a lot of different angles. But because RSS is an
open format it isn’t true from one very important angle. RSS didn’t go away.
It’s still available and supported on many platforms. Sometimes even on the
platforms that we think of as having run it over.</p>

<p>I’m unhappy with the way YouTube weights what shows up on my main page. I’ve
subscribed to lots of channels I actually care about. I might not watch every
video they put out, but I do care about what they post. If they post a few
videos about things I either happen to not care much about, or maybe I already
know a ton about so I don’t want to watch more, the algorithm decides I don’t
want to watch them any more. They disappear. Instead I get lots of
increasingly crappy videos. Shorts that are just movie clips with odd filters
applied and garbage quality videos with clickbait titles and great thumbnails.
I do like the serendipity of finding something new sometimes. But my main view
should really tilt more toward what I’ve already said I want to watch.</p>

<p>It’s pretty easy to make a version that does exactly that with RSS,
and as it turns out RSS is still available on YouTube. They even have the
headers on most pages that make it easy to find the RSS feed without having to
go digging around for it. So if you want to get the channels you care about
from YouTube, without letting the algorithm quietly drop the ones it doesn’t
feel like giving you, RSS is at least an option.</p>
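<p>To make the discovery step concrete, here’s a minimal Go sketch of pulling
the feed URL out of a page head. It’s a simplification (a real version should
use a proper HTML parser rather than a regexp), and the channel ID in the
example is made up:</p>

```go
package main

import (
	"fmt"
	"regexp"
)

// findFeedURL pulls the RSS/Atom autodiscovery link out of an HTML page's
// head section. A real implementation should use a proper HTML parser; the
// regexp here is just to show the shape of the lookup.
func findFeedURL(page string) string {
	re := regexp.MustCompile(`<link[^>]+type="application/(?:rss|atom)\+xml"[^>]*href="([^"]+)"`)
	m := re.FindStringSubmatch(page)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	// Hypothetical head snippet. YouTube channel feeds follow the pattern
	// /feeds/videos.xml?channel_id=ID; the ID below is made up.
	head := `<link rel="alternate" type="application/rss+xml" title="RSS" href="https://www.youtube.com/feeds/videos.xml?channel_id=UC_example">`
	fmt.Println(findFeedURL(head))
}
```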

<p>RSS reading options aren’t as numerous as they once were. But there are some
good tools out there. Thunderbird has an option to add feeds into your inbox,
and I believe Outlook still has something to handle feeds in a similar way.
And when I looked at the App Center on my Ubuntu machine there were a few 
dedicated desktop feed reader apps that might work well. I wanted a project to
play with recently, so instead of just picking up an existing one I made a
very minimal command line reader called
<a href="https://github.com/mikerowehl/feeder">Feeder</a>. It’s a command line tool that
tracks feeds in a minimal SQLite database, and it just writes out an HTML page
with the posts to read so I can open it in a normal browser. It works perfectly
for my use. I’m sure it’s too simple for others, but it might be a good match
for some so I figured I would share it.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[There used to be a lot of attention and development focused on RSS, a format meant to make it easy for users to subscribe to updates from a site and merge multiple subscriptions into a personal news site. There were all kinds of homepage creators and news readers that ingested RSS, additional formats for sharing your set of subscriptions or even how much you read and how much time you spent on the different sources, and I had worked at one point on a search engine specifically indexing feed content called Feedster. But as the centralized networks like Twitter, YouTube, and Facebook gained popularity it was easier for them to just provide a lot of the capabilities that people were looking for. So instead of publishing on their own and using RSS to stitch things together, more and more people just started and stayed on the platforms. Most people describe this as the centralized platforms having “won” over RSS. That’s true from a lot of different angles. But because RSS is an open format it isn’t true from one very important angle. RSS didn’t go away. It’s still available and supported on many platforms. Sometimes even on the platforms that we think of as having run it over.]]></summary></entry><entry><title type="html">Builder Research - XMTP</title><link href="https://rowehl.com/2025/11/20/builder-research-xmtp.html" rel="alternate" type="text/html" title="Builder Research - XMTP" /><published>2025-11-20T04:16:00+00:00</published><updated>2025-11-20T04:16:00+00:00</updated><id>https://rowehl.com/2025/11/20/builder-research-xmtp</id><content type="html" xml:base="https://rowehl.com/2025/11/20/builder-research-xmtp.html"><![CDATA[<p>I ran across <a href="https://xmtp.org/">XMTP</a> a few times while I was poking around
with other services. But I had pushed it off till later. It wasn’t core to
what I was looking at. But then while I was playing around with Paragraph I
saw somewhere XMTP delivery was an option for people to subscribe to your
content. That was a pretty interesting use case, so I picked it up and tried
to experiment with what I could. Keep in mind that my use case is a bit tilted
toward that notification channel idea.</p>

<p>One of the best places I found to get an overview is the 
<a href="https://messari.io/report/xmtp-unifying-web3-communication">Messari research report</a>.
As of right now (Nov 2025) that report has started to get a bit stale, but it
was a great way to understand what some of the high level ideas were. There’s
a bunch of pretty marketing heavy copy on the main XMTP site. The use cases and
tools described on the project pages didn’t really match up to what I was
seeing in the repos and code. And I never was able to figure out how to get
Paragraph to send anything via XMTP.</p>

<p>I think this stuff is just the result of the project cycling at a pretty high
frequency and working on new things. That did make it necessary to separate
hype and froth from actual service though. When I saw
<a href="https://xmtp.org/miniapps">the mini apps</a> example showing Base and Farcaster
on top of the Paragraph stuff I had run across initially I thought the project
was up and running in full. I thought maybe I could get involved in some way,
even though I might be late to the party. I’m not sure that’s the case though.
I’m pretty sure everything is running on the testnet currently. If there is a
mainnet up and going now I think it might just be getting used for some of the
demo use cases. There is a
<a href="https://github.com/xmtp/xmtpd/issues/1148">mainnet ops epic</a>
on the project board, but I didn’t see any
info about it in the docs. I figured maybe the mainnet stuff was tucked away
behind actually funding an account, but my attempts to check out the funding
portal all ran into errors or permission requests. So I assume that’s all very
early stage. Which is all fine, it’s an early project.</p>

<p>Like when I had poked around with the
<a href="/2025/11/06/builder-research-basic-attention-token.html">Basic Attention Token</a>
initially, I just wanted to know if this was something I could just pick up
and build with. I don’t think it would be for most of the uses I have in mind
currently. Though I could certainly build something with it. I put the agents
together to demo a simple
<a href="https://github.com/mikerowehl/xmtp-notification">notification setup</a>, and it
works. In order to be useful as a notification system there has to be a
critical mass of people who drive their actions out of the inbox you’re
delivering to. I’m not seeing that yet. That’s mostly because
of the target audience and not the technology. I’m currently thinking about some
crypto tools for online content creators. Generally content creators
want to lean on
things that are already mass behaviors, not drive new adoption. I could be
wrong there however. I could certainly see delivering a message about new
content into the same wallet you use to pay for that content yielding better
conversion rates for the content creator. But I assume if that was always the
case the XMTP options in Paragraph would be more prominent.</p>

<p>I do love the idea of the project. I like the idea of interoperability by
keeping the identity and inbox distinct. And messaging of some sort is a core
service, even if we might have to figure out the higher value versions if
there is postage in the system. I’ll likely keep poking around. Hopefully I
can find a small area where I can contribute a bit while keeping an eye on
how it evolves.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I ran across XMTP a few times while I was poking around with other services. But I had pushed it off till later. It wasn’t core to what I was looking at. But then while I was playing around with Paragraph I saw somewhere XMTP delivery was an option for people to subscribe to your content. That was a pretty interesting use case, so I picked it up and tried to experiment with what I could. Keep in mind that my use case is a bit tilted toward that notification channel idea.]]></summary></entry><entry><title type="html">Builder Research - Basic Attention Token</title><link href="https://rowehl.com/2025/11/06/builder-research-basic-attention-token.html" rel="alternate" type="text/html" title="Builder Research - Basic Attention Token" /><published>2025-11-06T23:47:00+00:00</published><updated>2025-11-06T23:47:00+00:00</updated><id>https://rowehl.com/2025/11/06/builder-research-basic-attention-token</id><content type="html" xml:base="https://rowehl.com/2025/11/06/builder-research-basic-attention-token.html"><![CDATA[<p>I’ve recently started getting back into some blockchain development. A part of
that has been digging into some projects to figure out where I might be able
to help out, and just sharpening up my analysis when it comes to crypto
projects. A vast majority of the info you find on the crypto sites is focused
on the value of a token. I want to know what the state of a project is from
the builder perspective though. Is this something I can build around or on top
of to help create some value?</p>

<p>The <a href="https://basicattentiontoken.org/">Basic Attention Token</a> is a pretty OG
project. I suspect most of the folks who are interested in the project already
know the broad strokes of what it was meant to do. But in case you somehow
landed on this and you don’t already know about BAT, the super high level
overview is that the token was meant to bring to the forefront the value
exchange around attention that happens during advertising. Many of us don’t
think about it, but when it comes to a lot of the business on the internet, we
the users are the “product” that gets sold. It’s our attention (and the
potential to divert that attention into commerce) that really drives most of
the sites we use and think of as “free”. We normally aren’t aware of the big
machine that’s trying to process our attention into dollars for someone else
because it’s often intentionally hidden. And even when it isn’t hidden there
are often so many stakeholders involved in what’s going on that it can be hard
to unwind who is doing what and what data is involved. The Basic Attention 
Token was meant to simplify and make the value exchange explicit.</p>

<p>That was the high level pitch as it had existed a few years ago. And
admittedly I haven’t followed it closely in the time since. I’ve tried to
bring myself up to speed over the last month or so, and there might be some
things I’ve missed in trying to figure out the current situation. But I’m
pretty sure I’ve got the major parts right, at least as far as thinking about
the token as a component of a development plan. The TL;DR is
that the token itself seems to be in transition as Brave shifts to a
different vision of how crypto and blockchain should fit into their overall
strategy. If you’re interested in more detail than that I’ll try to lay it
out.</p>

<p>The flow of BAT through the system is supposed to mainly be in
support of creators. The central diagram on the BAT site shows a flow of BAT
from both advertisers and users to creators. And the logo itself is
<a href="https://www.reddit.com/r/BATProject/comments/n1yasp/what_does_the_bat_logo_represent/">meant to represent the relationship between advertisers, users, and creators</a>
in the system they’re trying to create. But it seems like that relationship
has broken down under the current system.</p>

<p>In the original whitepaper the compensation paid out to the creators was meant
to
be an automatic outflow. The browsers would use directly instrumented time on
property to figure out how much BAT should be directed to the publisher of a
piece of content, and it would happen automatically behind the scenes. This was
dropped as an option a while ago. I’m
certainly not going to fault them for this no longer being the system in use
currently. I could see there being some huge issues with this method of
distribution.</p>

<p>In the absence of the automated contribution I did expect there to be an
effective alternative. I pictured there being decent tools for supporting
creators - maybe some tools for creators to drive the process on their own.
In my mind that would be things like setting up ongoing contributions to your
favorite creators, maybe tools for token gating some content, options to set up
automatic BAT splits between multiple creators working on a collaboration.
When I was looking at this part of the market a few years ago that’s the kind
of stuff the crypto-forward creators were looking at. But so far I haven’t
really seen evidence of activity on the publisher side. I’ve had a really
hard time trying to size the outflow of payments to creators at all. There are
lots of numbers shared on the BAT site, but the amount of BAT paid out to creators
isn’t one of the metrics. I would consider that core if the overall plan today
is the same as it was originally.</p>

<p>So I just did some legwork. I figured of all the folks likely to have BAT
contributions enabled, the blockchain and crypto media should skew toward
higher usage. And
since their audience is other crypto enthusiasts I assumed they would get 
some contributions. Figuring out how much usage there is for direct
contribution is admittedly kinda hard. The BAT token was on the Ethereum chain
to begin with, but it got bridged over to Solana to keep the costs down. That
makes sense. And there are a bunch of custodial systems that were built into
the Brave wallet to make it easy to sign up for the system and get users and
creators onboarded. But what I was surprised to see was that the existing
mechanisms didn’t smooth over the differences. I would often find someone who
did have contributions enabled, but when I went to contribute to them I wasn’t
able to because they’re on Uphold. That’s just broken. You have a user with
BAT in the wallet you told them to setup, and a creator who’s asking for BAT
to support their efforts, but it’s not possible to complete the transaction?
If that main loop in the system isn’t working consistently, it’s hard to
picture the overall health being great.</p>

<p>The wallet I have setup on Brave is a direct Ethereum account though, so the
few times I was able to get someone who was signed up as a Brave Creator and
also had the right config on their side I was able to send them some Eth. And
of course I could see the account I was sending to for those donations. There
had been some other technologies in the mix at one point to keep users
anonymous, but in this version at least we’re able to see each other for Eth
donations. My system is obviously not scientific at all, but I just wanted to
get something of a feel for whether publishers were getting paid out at all. And
from what I’ve been able to see, no. The few accounts I was able to get a bit
of a view into had very low activity. In one case, before my donation the last
deposit had been almost three years ago. And that was a pretty decent account
with a lot of activity that I would have expected to be a highlight.</p>

<p>But the BAT token itself is doing pretty well in terms of price, so what does
it really matter if the publishers aren’t ending up with the token? The price
holds up because the Brave Rewards system has the token as part of the process:
advertisers on Brave need the token to be able to reward users, and that puts
the token in a pretty decent place compared to lots of others that don’t have
any utility at all associated with them. That’s great for the price of the 
token, but not so much for the health of the ecosystem. For example, one of
the big selling points of Brave is that it does a lot of ad blocking and
stripping unessential content by default. (It really is a great browser by the
way. It’s my daily driver browser and they’re doing great work over there, I 
just don’t think the BAT token is working). If your browser is stripping out
ads that normally makes the content creators somewhat upset, cause they don’t
get paid for the views of their content. If you have an alternative way
for them to get compensated there’s a great answer to the content creators
concerns. But if you’re both blocking ads and not really minding how the new
system could be funneling them alternative value, there could be some issues
down the line.</p>

<p>And there are some other efforts that seem a bit odd in relation to the core
flow. Brave was encouraging a memecoin called Guano that
rewarded folks for locking up their BAT. And they’ve made a partnership to
allow for registering a .brave domain using your BAT. These things aren’t 
necessarily bad, but they don’t seem to be in service to the core mechanism
the token is supposed to support. The cynical side of me would say they seem
like the kind of thing I would do if I needed to
increase velocity of the token and provide some utility exits for BAT that was
pooling with users. But instead I’ll just say that these alternative ways to
use BAT just further starve out the creators. There are a few posts on the
community forum from creators who had been making some BAT for a while and
have seen a pretty drastic drop-off recently. I wouldn’t be surprised if that
drop-off correlated with the Guano launch. Why would users donate if they can
earn from their locked up BAT? Especially when actually using the BAT to
donate doesn’t even work a lot of the time?</p>

<p>So that’s my long-winded version. Brave is open source by the way, and the
community seems to be pretty accepting. My hope had been that I would find
something useful I could do to help out for the project. I know it’s not all
on Brave to solve every problem on their own. However, with the core loop of
the currency looking like it has stalled, I don’t think it makes
sense to try to build with it. It seems like the role of the token now
is shifting to a new model. When I see the things like their domain 
registration partnership and native IPFS support I think maybe there’s another
story starting to emerge around evening out access to infrastructure for folks
underserved by existing providers. There’s been mention on the community calls
of a revised roadmap coming up, hopefully that clears some of this up. I’ll
cycle back around to it when there are some updates.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I’ve recently started getting back into some blockchain development. A part of that has been digging into some projects to figure out where I might be able to help out, and just sharpening up my analysis when it comes to crypto projects. A vast majority of the info you find on the crypto sites is focused on the value of a token. I want to know what the state of a project is from the builder perspective though. Is this something I can build around or on top of to help create some value?]]></summary></entry><entry><title type="html">Google Oauth Tokens in Golang</title><link href="https://rowehl.com/2020/10/23/google-oauth-tokens-in-golang.html" rel="alternate" type="text/html" title="Google Oauth Tokens in Golang" /><published>2020-10-23T20:26:00+00:00</published><updated>2020-10-23T20:26:00+00:00</updated><id>https://rowehl.com/2020/10/23/google-oauth-tokens-in-golang</id><content type="html" xml:base="https://rowehl.com/2020/10/23/google-oauth-tokens-in-golang.html"><![CDATA[<p>I’m pretty sure I’ve run into this before and worked through it. But for some
reason my searches didn’t land on something that clicked till I was a few
iterations in. So I’m putting down a simple example so that when I search for
it next time I’ll find it… hopefully.</p>

<p>This is specifically for building a web app that authenticates against
<a href="https://developers.google.com/identity/sign-in/web/sign-in">Google Sign-In</a>.
The web part of the setup is super simple to get working, and there are tons
of examples and tutorials. Once I got to the phase of sending the token to a
backend service to validate, things got a little rockier. The first thing to
note is that the
<a href="https://developers.google.com/identity/sign-in/web/backend-auth">key to use when verifying a token</a>
is shared across all services. Cause I had mostly been following info in
tutorials I didn’t realize that for a while. I was trying to figure out how to
use the info from my project admin UI to validate. It was only after running
across another example of verifying a Google based token that I realized there
was a PEM somewhere that held the verification key.</p>
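<p>For reference, that shared key material comes back as a JSON object mapping
key IDs to PEM certificates. Here’s a tiny Go sketch of turning a response like
that into a lookup table; the sample body is a trimmed stand-in rather than a
real certificate, and the HTTP fetch is left out to keep it self-contained:</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseCerts decodes the JSON body returned by Google's public certificate
// listing (a map of key ID to PEM certificate) into a lookup table. Fetching
// the body over HTTP is intentionally left out of this sketch.
func parseCerts(body []byte) (map[string]string, error) {
	certs := map[string]string{}
	if err := json.Unmarshal(body, &certs); err != nil {
		return nil, err
	}
	return certs, nil
}

func main() {
	// A trimmed-down stand-in for the real response body.
	sample := []byte(`{"6f7254101f56e41cf35c99": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"}`)
	certs, err := parseCerts(sample)
	if err != nil {
		panic(err)
	}
	for kid := range certs {
		fmt.Println(kid)
	}
}
```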

<p>The info on that page says the Google API Client Libraries are the suggested
way to validate a token. But honestly I couldn’t get the Go version working for
this simple case. I thought I was using the libraries properly, cause they
seemed simple enough to just validate a token. I even found
<a href="https://stackoverflow.com/a/62984078/506507">a Stackoverflow sample</a>
that was almost exactly the same as what I had done. But I kept getting an
error out of the library that the token was invalid. I actually think there’s
some other dependency in that library. The error isn’t that the token I’m 
providing is invalid, but that the Google API Client assumes I have some kind
of identity setup (that Stackoverflow answer says to “provide your Google
credentials to your application”). I didn’t really want to figure out what
that was all about. And those Google API Client libs are crazy huge in their
Go implementation. When I did a go get on them it downloaded almost a gig of
objects. So I started trying out some alternatives.</p>

<p>There’s a
<a href="https://github.com/dgrijalva/jwt-go">nice clean JWT library</a>
that looks like it’s still in pretty regular use. And it does work very well.
It can just be a bit tough to dig up a good RSA based example of validation.
There are some convenience functions in jwt-go that make it pretty simple to
setup. But there are a bunch of examples around that seem to pre-date some of
the convenience functions. So even the good samples can sometimes be a bit 
intimidating.  Plus the tokens that you get back from Google embed a key id
that you need to use to look up the right certificate from their public
listing. So I put together a sample using a current version of jwt-go:</p>

<script src="https://gist.github.com/mikerowehl/3094498f6227571c736d3662e4cb2ae5.js"></script>

<p>This is just a sample, mind you; you don’t want to be fetching a chunk of
JSON over HTTP in the key lookup function if you’re using this in some
middleware. But the ideas are all there I believe: verify that the signing
method is the expected algorithm, use the key ID from the token header to find
the right verification key, and use a nice convenience function to parse the
PEM from Google into a form usable as the return from the key lookup.</p>
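<p>The kid handling is easy to poke at from the command line too. Here’s a
quick sketch that builds a stand-in header segment (a fake token, just for
illustration, not a real Google one) and pulls the kid back out, which is the
same first step the key lookup does:</p>

```shell
# Build a stand-in JWT header segment: base64url with padding stripped,
# per RFC 7515. This is a fake token just for illustration.
header_json='{"alg":"RS256","kid":"abc123"}'
token="$(printf '%s' "$header_json" | base64 | tr '+/' '-_' | tr -d '=').payload.sig"

# The key lookup starts by decoding the first dot-separated segment to get
# the kid. (A real decoder re-adds '=' padding first; this header happens
# to need none.)
printf '%s' "$token" | cut -d. -f1 | tr '_-' '/+' | base64 -d
echo
# prints {"alg":"RS256","kid":"abc123"}
```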

<p>You should be able to just go run it as a standalone, pass in the token you
want to verify, and you get back the claims from the token if everything
worked. If things don’t work, definitely use the
<a href="https://developers.google.com/identity/sign-in/web/backend-auth">tokeninfo endpoint</a>
before you dig in too much. That service makes it easy to catch basic
mistakes, like pulling out the wrong string from a web response to use as a
token. Not that I would ever do that, no. But I’ve heard from friends that they’ve had
that problem …</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I’m pretty sure I’ve run into this before and worked through it. But for some reason my searches didn’t land on something that clicked till I was a few iterations in. So I’m putting down a simple example so that when I search for it next time I’ll find it… hopefully.]]></summary></entry><entry><title type="html">Service Container Manipulation</title><link href="https://rowehl.com/2020/09/24/service-container-manipulation.html" rel="alternate" type="text/html" title="Service Container Manipulation" /><published>2020-09-24T16:39:00+00:00</published><updated>2020-09-24T16:39:00+00:00</updated><id>https://rowehl.com/2020/09/24/service-container-manipulation</id><content type="html" xml:base="https://rowehl.com/2020/09/24/service-container-manipulation.html"><![CDATA[<p>The Github actions functionality has been pretty spectacular, in particular
<a href="https://docs.github.com/en/actions/guides/about-service-containers">service containers</a>
for setting up a test environment. One bit of annoyance, though, is doing
setup on a container: when the container gets created the repo hasn’t been
checked out yet, so I can’t do the normal mount of an init script into a
postgres container like I would with docker-compose locally. There are
lots of ways to work around that, but I just ran across an example that uses
docker on the runner image and info from 
<a href="https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#job-context">the job context</a>
to exec a few tasks on a service container.</p>

<script src="https://gist.github.com/mikerowehl/187a0a8685079ef5c8c1bd6bf71797a4.js"></script>
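<p>For reference, the shape of the trick is roughly this. A hypothetical
workflow excerpt, not the gist verbatim — the postgres service, file names,
and psql commands are made up:</p>

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      # The service container is started before any steps run
      postgres:
        image: postgres:12
        env:
          POSTGRES_PASSWORD: postgres
    steps:
      - uses: actions/checkout@v2
      # Now the repo exists on the runner, and the service container's ID
      # is exposed in the job context, so setup can be pushed into it.
      - name: Load schema into the postgres service
        run: |
          docker cp schema.sql ${{ job.services.postgres.id }}:/schema.sql
          docker exec ${{ job.services.postgres.id }} \
            psql -U postgres -f /schema.sql
```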

<p>Mind blown. For some reason I thought those service containers were tucked
away somewhere special and not exposed directly on the runner system. What a
fantastic tool to have available.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The Github actions functionality has been pretty spectacular, in particular service containers for setting up a test environment. One bit of annoyance though can be trying to do setup on a container. When the container gets created the repo hasn’t been checked out yet. So I can’t do the normal mount of some init into a postgres container like I would using docker-compose locally. There are lots of ways to work around that, but I just ran across an example that uses docker on the runner image and info from the job context to exec a few tasks on a service container.]]></summary></entry><entry><title type="html">ktool for Building OSDev Samples</title><link href="https://rowehl.com/2020/09/10/ktool-with-osdev-samples.html" rel="alternate" type="text/html" title="ktool for Building OSDev Samples" /><published>2020-09-10T13:53:00+00:00</published><updated>2020-09-10T13:53:00+00:00</updated><id>https://rowehl.com/2020/09/10/ktool-with-osdev-samples</id><content type="html" xml:base="https://rowehl.com/2020/09/10/ktool-with-osdev-samples.html"><![CDATA[<p>The other day I packaged up
<a href="https://github.com/mikerowehl/ktool">a build of cross-compile tools for i686 bare images</a>
using the
<a href="https://wiki.osdev.org/GCC_Cross-Compiler">instructions from the OSDev wiki</a>
for making a cross-compiler suitable for building bootable images and kernels.
I had started fooling around with dockcross to do the builds, but some of the
OSDev samples explicitly complain if you use a compiler that can be used to
build for a Linux target. So I just packaged up something of my own and made
a super simple wrapper script to make it less cumbersome to run.</p>

<p>Today I added a few tools so that it’ll easily build a workable image for the
<a href="https://wiki.osdev.org/Meaty_Skeleton">Meaty Skeleton sample</a> with very
little setup required. Here’s how to do it from a clean system (with Docker
already installed):</p>

<script src="https://gist.github.com/mikerowehl/85ba878ef3b22b0154256f9bbcd04ee9.js"></script>
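<p>The image keeps its size down by doing the toolchain download, build,
install, and cleanup inside a single RUN, so no intermediate layer ends up
holding the source tree. Here’s a hypothetical sketch of that pattern that
builds just cross-binutils following the OSDev wiki recipe — versions and
paths are illustrative, not ktool’s actual build:</p>

```dockerfile
FROM debian:buster-slim
# Everything in one RUN: the source and build trees are deleted before the
# layer is committed, so only /opt/cross survives into the image.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential wget ca-certificates && \
    wget -q https://ftp.gnu.org/gnu/binutils/binutils-2.35.tar.gz && \
    tar xf binutils-2.35.tar.gz && \
    mkdir build-binutils && cd build-binutils && \
    ../binutils-2.35/configure --target=i686-elf --prefix=/opt/cross \
        --with-sysroot --disable-nls --disable-werror && \
    make -j"$(nproc)" && make install && \
    cd .. && rm -rf binutils-2.35 binutils-2.35.tar.gz build-binutils && \
    rm -rf /var/lib/apt/lists/*
ENV PATH="/opt/cross/bin:${PATH}"
```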

<p>I like the way it worked out. At first the Docker image was huge cause the
tool download, build, and cleanup were separate steps, each one leaving a
layer behind. Now there’s a big and kinda ugly step in the Dockerfile that
downloads, builds, installs, and then cleans up all as one chunk. But the
result is a relatively trim container
that makes the startup a lot simpler.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The other day I packaged up a build of cross-compile tools for i686 bare images using the instructions from the OSDev wiki for making a cross-compiler suitable for building bootable images and kernels. I had started fooling around with dockcross to do the builds, but some of the OSDev samples explicitly complain if you use a compiler that can be used to build for a Linux target. So I just packaged up something of my own and made a super simple wrapper script to make it less cumbersome to run.]]></summary></entry><entry><title type="html">Using Docker to Cross Compile</title><link href="https://rowehl.com/2020/09/04/using-docker-to-cross-compile.html" rel="alternate" type="text/html" title="Using Docker to Cross Compile" /><published>2020-09-04T13:57:00+00:00</published><updated>2020-09-04T13:57:00+00:00</updated><id>https://rowehl.com/2020/09/04/using-docker-to-cross-compile</id><content type="html" xml:base="https://rowehl.com/2020/09/04/using-docker-to-cross-compile.html"><![CDATA[<p>I’ve been poking around with a bunch of low level programming stuff. Looking
at old source code and playing with some newer examples, like
<a href="http://3zanders.co.uk/2017/10/13/writing-a-bootloader/">Alex Parker’s fantastic example of a boot loader with assembly and C++</a>.
There’s lots of interesting stuff out there. But I never like having to set up
a cross compiler on my system. I used to do a ton of that, and the toolchain
would always get messed up somehow.</p>

<p>So I considered just playing around with any of this low level compiling stuff
inside a Linux VM. Not fantastic, but I’m sure it would work well. But then I
realized that really all I needed was an isolated toolchain, so that should be
something worth setting up in Docker. The more I thought about it the more it
sounded like a great idea. A great enough idea that I was sure someone had
thought about it and done it already.</p>

<p>Check out the <a href="https://github.com/dockcross/dockcross">dockcross</a> project for
a fantastic example of a cross compile toolchain in a Docker container. They 
have a bunch of supported target systems and a nice wrapper script that makes
it easy to call the toolchain as if it was running locally. I used the
dockcross/linux-x86 image to compile the examples from Alex Parker’s blog
posts. I’m still running nasm locally, but the compile and link are done using
the tools from the docker image. Slightly different command line, but I think
that’s just caused by different defaults. I haven’t fully gone through all the
options, so I can’t vouch for them being correct for all cases. But they leave
you with a running boot sector that loads a routine from C++:</p>

<script src="https://gist.github.com/mikerowehl/ffaad752adc98a6286c6297d0a63c44b.js"></script>]]></content><author><name></name></author><summary type="html"><![CDATA[I’ve been poking around with a bunch of low level programming stuff. Looking at old source code and playing with some newer examples, like Alex Parker’s fantastic example of a boot loader with assembly and C++. There’s lots of interesting stuff out there. But I never like having to setup a cross compiler on my system. I used to do a ton of that, and the toolchain would always get messed up somehow.]]></summary></entry><entry><title type="html">Booting MMURTL in VirtualBox</title><link href="https://rowehl.com/2020/09/03/booting-mmurtl-in-virtualbox.html" rel="alternate" type="text/html" title="Booting MMURTL in VirtualBox" /><published>2020-09-03T12:50:00+00:00</published><updated>2020-09-03T12:50:00+00:00</updated><id>https://rowehl.com/2020/09/03/booting-mmurtl-in-virtualbox</id><content type="html" xml:base="https://rowehl.com/2020/09/03/booting-mmurtl-in-virtualbox.html"><![CDATA[<p>I was looking through some of the old books on my bookshelf when I ran across
my copy of 
<a href="/assets/images/32bit_os_book.jpg">Developing Your Own 32-Bit Operating System</a>.
I spent a bunch of time playing around with the code when I picked up the book,
uh… more than 20 years ago. Wow.
Eventually Linux took over a lot of that interest, however, and I haven’t looked
at the book in decades. It was a wonderful learning system that I picked up a
ton of experience from. And that was back before we had decent emulation and
virtualization services to work with. I wondered if the code was still floating
around and if anyone was using it.</p>

<p>I was excited to see that Richard Burgess has
<a href="http://www.ipdatacorp.com/mmurtl/">a site up with the source and a PDF of the book</a>.
And the site encourages folks to share and contribute, so there’s a
<a href="https://github.com/the-grue/MMURTL-OS">copy up on GitHub</a> that’s a bit
patched up and includes some binary images. Yay!</p>

<p>I believe it has been just over 20 years since I used MMURTL. So I wanted to
just boot the thing up and refresh my memory before I dug in too much. Best
place to start I figured was one of the
<a href="https://github.com/the-grue/MMURTL-OS/tree/master/images">floppy disk images</a>
that the grue was nice enough to publish in his repo. I created a new DOS
machine in VirtualBox and attached the floppy image, and I did get the
MMURTL monitor kinda up. But partially cause I didn’t remember how things
worked and partially cause of some quirks in the setup it took a few tries.</p>

<p>If you’re trying to do the same thing and run into issues using the images
with VirtualBox there are two gotchas to keep in mind. First off, if you
create a machine in VirtualBox and attach a new disk image to it for the
hard drive, that hard drive image is completely uninitialized. MMURTL doesn’t
handle that well. So at first I was having a problem where MMURTL would start
up and scan through devices, but fail while it was setting up the hard
drives. A quick scan through the source made it obvious that the setup was
failing cause there was no partition table. So I removed the hard disk image.
But then the setup was failing cause there was no hard drive. What I ended
up doing was attaching the blank hard disk image to a Linux VM I already had
configured and using that to partition the drive. Then I reattached it to the
MMURTL VM and it finished booting up. Yay!</p>

<p>But when it booted up it just sat there. That wasn’t what I remembered from
when I had been experimenting years ago. It wasn’t completely obvious from
what was on the screen what was going on. When MMURTL first boots up it’s in
a monitor program (described in chapter 11 of the book, starting on page 142)
and you need to use one of the function keys to get it to display some info
or launch into a CLI. The F1 key launches the CLI, which started up fine for
me and dropped me where I expected to end up!</p>

<p>Now that I think about it, I believe I had just configured the default job
file for my monitor to launch into a CLI automatically all those years ago.
So the behavior from the image posted is probably totally reasonable.
Still, in case you’re coming into this fresh, I figured this could be a
useful tip.</p>

<p>I have no idea what I’ll end up doing with this. I have been doing some 6502
single board hackery. And that did get me a bit fired up again for doing some
low level work. Not sure if I’m fired up enough to get myself fully back into
proper Linux kernel dev. But maybe hacking around a smaller system like this
would be some fun.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I was looking through some of the old books on my bookshelf when I ran across my copy of Developing Your Own 32-Bit Operating System. I spent a bunch of time playing around with the code when I picked up the book, uh… more than 20 years ago. Wow. Eventually Linux took over a lot of that interest however and I haven’t looked at the book in decades. It was a wonderful learning system that I picked up a ton of experience from. And that was back before we had decent emulation and virtualization services to work with. I wondered if the code was still floating around and if anyone was using it.]]></summary></entry><entry><title type="html">Finished a Program 29 Years Later</title><link href="https://rowehl.com/2013/11/19/finished-a-program-29-years-later.html" rel="alternate" type="text/html" title="Finished a Program 29 Years Later" /><published>2013-11-19T22:20:00+00:00</published><updated>2013-11-19T22:20:00+00:00</updated><id>https://rowehl.com/2013/11/19/finished-a-program-29-years-later</id><content type="html" xml:base="https://rowehl.com/2013/11/19/finished-a-program-29-years-later.html"><![CDATA[<p>My parents were getting ready to sell their house a few months ago, and
while I was there I grabbed a few 
<a href="https://twitter.com/miker/status/330731191168806912">super old computer books</a>
from when I was a kid. Nostalgia I suppose, and I figured they would be cool
to have around. What I didn’t expect was that I would find a printout and a
few pages of handwritten notes from a program I was working on when I was 9.
But of course, once I did, the only rational response was to see if I could
finally get it working. Which it now is.</p>

<p>While I know it definitely isn’t
the only 6510-related project on GitHub, I have to assume it’s one of a very
small number of CBM Basic programs. It’s a
<a href="https://github.com/mikerowehl/c64-assembler/blob/master/assembler.bas">6510 assembler written in Basic</a>.
The Commodore 64 was the machine I had when I started to really get into 
programming. When the thing booted up it had Basic burned into ROM, which was
a fantastic way to get things going. But I knew what was really going on 
underneath was all machine language, and that the real programmers wrote in
assembly. The Commodore 64 didn’t come with a built in assembler or machine
monitor, like some of the other 8 bit systems at the time did. And we didn’t
have a ton of money, so getting one of the commercial assemblers at the time
wasn’t really an option. So I figured I would write my own.</p>

<p>I actually spent tons of time manually compiling instructions and poking them
into memory back then, which was insanely educational. I forgot how close we
were to the hardware all the time. And trying to work out how to
build an assembler on my own was probably a fantastic set of experience to
have built up. I ended up getting my hands on a commercial assembler
eventually though, and never really finished up my own version.</p>

<p>Like most programming problems, the issue wasn’t really with the code; it
was with the data abstractions. The program was actually functional as it was,
just kinda buggy, and relatively convoluted cause it was trying to parse the
operands to figure out
which instruction to use. I wanted to get the program working, but I didn’t
want it to take me too long either. So I ended up cheating and used
non-standard mnemonics for the instructions to pack info about addressing mode
and index registers into the mnemonic itself. For instance, instead of using
“cmp ($1a,x)” for indexed indirect addressing I swapped to using “cmpix $1a”.
The expressiveness is the same; it just means I can pull in a token at a time
without any need to apply logic. The thing really becomes just a lookup table.</p>
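<p>The whole trick can be sketched in a few lines — obviously the real thing
was written in CBM Basic, shell is just handy for showing the shape. The
extended mnemonics are my made-up ones (just the cmp family here); the hex
values are the real 6510 opcodes:</p>

```shell
# With the addressing mode packed into the mnemonic, assembling a line
# needs no operand parsing at all: it's a straight table lookup.
lookup() {
  case "$1" in
    cmpim) echo C9 ;;  # cmp #$nn    - immediate
    cmpzp) echo C5 ;;  # cmp $nn     - zero page
    cmpix) echo C1 ;;  # cmp ($nn,x) - indexed indirect
    cmpiy) echo D1 ;;  # cmp ($nn),y - indirect indexed
    *) echo "unknown mnemonic: $1" >&2; return 1 ;;
  esac
}
lookup cmpix   # prints C1
```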

<p>So in a grand total of about 45 minutes, including the time to figure
out how to use the VICE emulator and load Basic programs into it without
having to type them in by hand, I had a working program that was able to
run the example code I had been trying out 29 years ago. I think if 9 year old
Miker could see that 45 minutes of hacking and the result he would be pretty
impressed.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[My parents were getting ready to sell their house a few months ago, and while I was there I grabbed a few super old computer books from when I was a kid. Nostalgia I suppose, and I figured they would be cool to have around. What I didn’t expect was that I would find a printout and a few pages of handwritten notes from a program I was working on when I was 9. But of course, once I did, the only rational response was to see if I could finally get it working. Which it now is.]]></summary></entry></feed>