On Friday, I said goodbye to a colleague as she left us after most of a decade with the company. Then this morning, all hell broke loose on some production servers.
It turns out that the API key that connected our application to our feature flag management platform was associated with her account, and hadn’t shown up in the exit audit.
Let this be your reminder to go check where, if anywhere, your applications are using person-specific keys where they should be using generic ones!
With its few columns and large hit areas, the game’s well-optimised for mobile play.
The premise is simple enough:
5-column solitaire game with 1-5 suits.
23 cards dealt out into those columns; only the topmost ones face-up.
2 “reserve” cards retained at the bottom.
Stacks can be formed atop any suit based on value-adjacency (in either order, even mixing the ordering within a stack).
Individual cards can always be moved, but stacks can only be moved if they share a value-adjacency chain and are all the same suit.
Aim is to get each suit stacked in order at the top.
Well this looks like a suboptimal position…
One of the things that stands out to me is that the game comes with over five thousand pre-shuffled decks to play, all of which are guaranteed to be “winnable”.
Playing through these is very satisfying because it means that if you get stuck, you know that it’s because of a choice that you made2, and not (just) because you got unlucky with the deal.
After giving us 5,105 pregenerated ‘decks’, author Zach Gage probably thinks we’ll never run out of playable games. Some day, I might prove him wrong.
Every deck is “winnable”?
When I first heard that every one of FlipFlop‘s pregenerated decks was winnable, I misinterpreted it as claiming that every conceivable shuffle for a game
of FlipFlop was winnable. But that’s clearly not the case, and it doesn’t take significant effort to come up with a deal that’s clearly not-winnable. It only takes a
single example to disprove a statement!
If you think you’ve found a solution to this deal – for example, by (necessarily) dealing out all of the cards, then putting both reserve kings out and stacking everything else on top
of them in order to dig down to the useful cards – bear in mind that (a) the maximum stack depth of 20 means you can get to a 6, or a 5, but not both, and (b) you can’t then move any
of those stacks in aggregate because – although it’s not clear in my monochrome sketch – the suits cycle in a pattern to disrupt such efforts.
That it’s possible for a fairly-shuffled deck of cards to lead to an “unwinnable” game of FlipFlop Solitaire means the author must necessarily have had some
mechanism to differentiate between “winnable” (which are probably the majority) and “unwinnable” ones. And therein lies an interesting problem.
If the only way to conclusively prove that a particular deal is “winnable” is to win it, then the developer must have had an algorithm that they were using to test that a given
deal was “winnable”: that is – a brute-force solver.
So I had a go at making one3.
The code is pretty hacky (don’t judge me) and, well… it takes a long, long time.
This isn’t an animation, but it might as well be! By the time you’ve permuted all possible states of the first ten moves of this starting game, you’re talking about having somewhere
in the region of three million possible states. Solving a game that needs a minimum of 80 moves takes… a while.
Partially that’s because the underlying state engine I used, BfsBruteForce, is a breadth-first optimising algorithm. It aims to
find the absolute fewest-moves solution, which isn’t necessarily the fastest one to find because it means that it has to try all of the “probably stupid” moves it
finds4
with the same priority as the “probably smart” moves5.
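To give a flavour of the approach – this is just a simplified sketch, not my actual (much hackier) solver, and `neighbours` here is a stand-in for the function that enumerates FlipFlop’s legal moves – a breadth-first search over game states looks something like this:

```ruby
require "set"

# Breadth-first search over abstract game states. Returns the shortest list
# of moves from `start` to `solved`, or nil if the state space is exhausted
# (i.e. the deal is unwinnable). `neighbours` is a callable returning
# [move, next_state] pairs for every legal move from a given state.
def bfs_solve(start, solved, neighbours)
  seen  = Set[start]
  queue = [[start, []]]            # each entry: [state, moves-so-far]
  until queue.empty?
    state, moves = queue.shift
    return moves if state == solved
    neighbours.call(state).each do |move, next_state|
      next if seen.include?(next_state)   # eliminate repeat states
      seen << next_state
      queue << [next_state, moves + [move]]
    end
  end
  nil
end
```

Because it explores all states at depth *n* before any at depth *n*+1, it guarantees a fewest-moves solution – which is exactly why it wades through every “probably stupid” move along the way.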
If you pull off a genuinely random shuffle, then – statistically-speaking – you’ve probably managed to put that deck into an order that no deck of cards has ever been in
before!6
And sure: the rules of the game reduce the number of possibilities quite considerably… but there’s still a lot of them.
So how are “guaranteed winnable” decks generated?
I think I’ve worked out the answer to this question: it came to me in a dream!
Show this puzzle to any smarter-than-average child and they’ll quickly realise that the fastest way to get the solution is not to start from each programmer and trace
their path… but to start from the laptop and work backwards!
The trick to generating “guaranteed winnable” decks for FlipFlop Solitaire (and, probably, any similar game) is to work backwards.
Instead of starting with a random deck and checking if it’s solvable by performing every permutation of valid moves… start with a “solved” deck (with all the cards stacked
up neatly) and perform a randomly-selected series of valid reverse-moves! E.g.:
The first move is obvious: take one of the kings off the “finished” piles and put it into a column.
For the next move, you’ll either take a different king and do the same thing, or take the queen that was exposed from under the first king and place it either in an empty
column or atop the first king (optionally, but probably not, flipping the king face down).
With each subsequent move, you determine what the valid next-reverse-moves are, choose one at random (possibly with some kind of weighting), and move on!
In computational complexity theory, you just transformed an NP-Hard problem7
into a P problem.
Once you eliminate repeat states and weight the randomiser to gently favour moving “towards” a solution that leaves the cards set-up and ready to begin the game, you’ve created a
problem that may take an indeterminate amount of time… but it’ll be finite and its complexity will scale linearly. And that’s a big improvement.
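The skeleton of that reverse-walk, in Ruby, might look something like this (a toy sketch only: `reverse_moves` is a stand-in for FlipFlop’s actual un-move rules, and there’s no weighting here):

```ruby
# Generate a "guaranteed winnable" start position by walking backwards from a
# solved state, applying randomly-chosen valid reverse-moves. Because every
# step is a legal move played in reverse, replaying the choices forwards is
# always a winning line.
def scramble(solved_state, reverse_moves, steps:, rng: Random.new)
  state = solved_state
  steps.times do
    options = reverse_moves.call(state)   # all legal "un-moves" from here
    break if options.empty?
    state = options.sample(random: rng)   # optionally weight this choice
  end
  state
end
```

No searching, no backtracking: the work done is simply proportional to the number of reverse-moves you choose to make.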
I started implementing a puzzle-creator that works in this manner, but the task wasn’t as interesting as the near-impossible brute-force solver so I gave up, got distracted,
and wrote some even more-pointless code instead.
If you go ahead and make an open source FlipFlop deck generator, let me know: I’d be interested to play with it!
Footnotes
1 I don’t get much time to play videogames, nowadays, but I sometimes find that I’ve got
time for a round or two of a simple “droppable” puzzle game while I’m waiting for a child to come out of school or similar. FlipFlop Solitaire is one of only three games I
have installed on my phone for this purpose, the other two – both much less frequently-played – being Battle of Polytopia and the
buggy-but-enjoyable digital version of Twilight Struggle.
2 Okay, it feels slightly frustrating when you make a series of choices that are
perfectly logical and the most-rational decision under the circumstances. But the game has an “undo” button, so it’s not that bad.
4 An example of a “probably stupid” move would be splitting a same-suit stack in order to
sit it atop a card of a different suit, when this doesn’t immediately expose any new moves. Sometimes – just sometimes – this is an optimal strategy, but normally it’s a pretty bad
idea.
5 Moving a card that can go into the completed stacks at the top is usually a good idea…
although just sometimes, and especially in complex mid-game multi-suit scenarios, it can be beneficial to keep a card in play so that you can use it as an anchor for something else,
thereby unblocking more flexible play down the line.
6 Fun fact: shuffling a deck of cards is a sufficient source of entropy that you can use
it to generate cryptographic keystreams, as Bruce Schneier demonstrated in 1999.
7 I’ve not thought deeply about it, but determining if a given deck of cards will result
in a winnable game probably lies somewhere between the travelling salesman and the halting problem, in terms of complexity, right? And probably not something a right-thinking person
would ask their desktop computer to do for fun!
Unlike Alice, who spent the year reading papers with a pencil in hand, scribbling notes in the margins, getting confused, re-reading, looking things up, and slowly assembling a
working understanding of her corner of the field, Bob has been using an AI agent. When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed
to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent’s fix introduced a new bug, it debugged
that too. When it came time to write the paper, the agent wrote it. Bob’s weekly updates to his supervisor were indistinguishable from Alice’s. The questions were similar. The
progress was similar. The trajectory, from the outside, was identical.
Here’s where it gets interesting. If you are an administrator, a funding body, a hiring committee, or a metrics-obsessed department head, Alice and Bob had the same year. One paper
each. One set of minor revisions each. One solid contribution to the literature each. By every quantitative measure that the modern academy uses to assess the worth of a scientist,
they are interchangeable. We have built an entire evaluation system around counting things that can be counted, and it turns out that what actually matters is the one thing that
can’t be.
…
The strange thing is that we already know this. We have always known this. Every physics textbook ever written comes with exercises at the end of each chapter, and every physics
professor who has ever stood in front of a lecture hall has said the same thing: you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to
attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like
understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We
have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI
agents, we’ve collectively decided that maybe this time it’s different. That maybe nodding at Claude’s output is a substitute for doing the calculation yourself. It isn’t. We knew
that before LLMs existed. We seem to have forgotten it the moment they became convenient.
Centuries of pedagogy, defeated by a chat window.
…
This piece by Minas Karamanis is excellent throughout, and if you’ve got the time to read it then you should. He’s a physics postdoc, and this post comes from his experience in his own
field, but I feel that the concerns he raises are more-widely valid, too.
In my field – of software engineering – I have similar concerns.
Let’s accept for a moment that an LLM significantly improves the useful output of a senior software engineer (which is very-definitely disputed, especially for the “10x” level of claims we often hear, but let’s just take it as-read for now). I’ve
experimented with LLM-supported development for years, in various capacities, and it certainly sometimes feels like they do (although it sometimes also feels like they have the
opposite effect!). But if it’s true, then yes: an experienced senior software engineer could conceivably increase their work performance by shepherding a flock of agents through a
variety of development tasks, “supervising” them and checking their work, getting them back on-course when they make mistakes, approving or rejecting their output, and stepping in to
manually fix things where the machines fail.
In this role, the engineer acts more like an engineering team lead, bringing their broad domain experience to maximise the output of those they manage. Except who they manage is… AI.
Again, let’s just accept all of the above for the sake of argument. If that’s all true… how do we make new senior developers?
Junior developers can use LLMs too. And those LLMs will make mistakes that the junior developer won’t catch, because the kinds of mistakes LLMs make are often hard to spot and require
significant experience to identify. But if they’re encouraged to use LLMs rather than making mistakes by hand and learning from them – to keep up, for example, or to meet corporate
policies – then these juniors will never gain the essential experience they’ll one day need. They’ll be disenfranchised of the opportunity to grow and learn.
It’s yet to be proven that more-sophisticated models will “solve” this problem, but my understanding is that issues like hallucination are fundamentally unsolvable: you might
get fewer hallucinations in a better model, but that just means that those hallucinations that slip through will be better-concealed and even harder to identify in code review
or happy-path testing.
Maybe – maybe – the trajectory of GPTs is infinite, and they’ll keep getting “smarter” to the point at which this doesn’t matter: programming genuinely will become a natural language
exercise, and nobody will need to write or understand code at all. In this possible reality, the LLMs will eventually develop entire new programming languages to best support their
work, and humans will simply express ideas and provide feedback on the outputs. But I’m very sceptical of that prediction: it’s my belief that the mechanisms by which LLMs work have a
fundamental ceiling – a capped level of sophistication that can be approached but never exceeded. And sure, maybe some other, different approach to AI might not have this
limitation, but if so then we haven’t invented it yet.
Which suggests that we will always need experienced engineers to shepherd our AIs. Which brings us back to the fundamental question: if everybody uses AI to code, how do we
make new senior developers?
I have other concerns about AI too, of course, some of which I’ve written about. But this one’s top-of-mind today, thanks to Minas’ excellent article. Go read it to learn more about how
physics research faces a similar threat… and, perhaps, consider how your own field might need to face this particular challenge.
Some days, developing Three Rings is about being hunched over a keyboard alone in the middle of the night, swearing at Rubygem incompatibilities.
But just occasionally it’s about getting together in beautiful places with some of the most dedicated geeks I know… to swear about Rubygem incompatibilities.
Either way, a walk in the garden can lead to the insight that gets you to the solution.
The other day I needed to solve a puzzle1. Here’s the essence of it: there was a grid of 16 words. They needed to be organised into four thematic “groups” of four words each;
then each group needed to be sorted alphabetically.
Each item in each group had a two-character code associated with it: these were to be concatenated together into a string and added to a pastebin.com/... URL. The correct
four URLs would each contain a quarter of the answer to the puzzle.
Apparently this puzzle format is called “Only Connect” and is based on a TV show?2
I’m sure I could have solved the puzzle. But I figured it’d be more satisfying to solve a different puzzle, with the same answer: how to write a program
that finds the correct URLs for me.
I’m confident that this approach was faster.3
Or rather: it would have been if it hadn’t been for the fact that I felt the need to subsequently write a blog post about it.
Here’s how it works:
It creates an array containing the 43,680 possible permutations of 4 from the 16 words.
It sorts the permutations and removes duplicates, reducing the set to just 1,820.
It removes the bit of each that isn’t the two-digit code at the end and concatenates them into a URL.
It tries each URL, with short random gaps between them, listing each one that isn’t a 404 “Not found” response.4
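The first three steps take only a few lines of Ruby. Here’s a sketch with made-up words and codes (I’m not going to spoil the real puzzle!); as a bonus, Ruby’s `Array#combination` yields the 1,820 unique groups directly, so there’s no need to generate all 43,680 permutations and then deduplicate:

```ruby
# Made-up stand-ins for the real grid: sixteen "word:code" pairs, where the
# code is the two-character value attached to each word in the puzzle.
WORDS = %w[alpha:aa bravo:bb charlie:cc delta:dd echo:ee foxtrot:ff
           golf:gg hotel:hh india:ii juliet:jj kilo:kk lima:ll
           mike:mm november:nn oscar:oo papa:pp]

# Every unordered group of four, alphabetised, with its two-character codes
# concatenated into a candidate pastebin URL.
candidate_urls = WORDS.combination(4).map do |group|
  codes = group.sort.map { |word| word.split(":").last }.join
  "https://pastebin.com/#{codes}"
end
```

The final step – fetching each candidate with a polite random sleep between requests and keeping any that don’t 404 – is then just a loop over `candidate_urls`.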
I kicked off the program and got on with some work. Meanwhile, in the background, it permuted the puzzle for me. Within a few minutes, I had four working pastebin URLs, which
collectively gave me the geocache’s coordinates. Tada!
Was this cheating?
I still solved a puzzle. It probably took me, as a strong programmer, about as long as it would have taken me to solve the puzzle the conventional way were I a strong… “only
connect”-er5.
But I adapted the puzzle into a programming puzzle and solved it a completely different way. Here are the arguments, as I see them:
Yes, this was cheating. This wasn’t the way the puzzle author intended it to be solved. Inelegantly brute-forcing a problem isn’t “solving” it, it’s sidestepping
it. If everybody did this, there’d be no point in the author putting the time into the puzzle in the first place.
No, this wasn’t cheating. This solution still required solving a puzzle, just a different one. A bad human player making a lucky guess would be fine. It’s
a single-player game; play any way that satisfies you. Implementing software to assist is no worse than asking a friend for help, as others have done.
Click on a 😡 or a 🧠 to let me know whether you think I cheated or not, or drop me a comment if you’ve got a more-nuanced opinion.
2 Don’t try to solve this one; it’s randomly generated.
3 This version of the program is adapted to the fake gameboard I showed earlier. You won’t
get any meaningful results by running this program in its current state. But you could quickly adapt it to a puzzle of this format, I suppose.
4 It occurred to me that it could have been more-efficient to eliminate from the list any
possibilities that are ruled-out by any existing finds… but efficiency is a balancing act. For a program that you’ll only run once – and in the background, while you do other things,
to boot – there’s a tipping point at which it’s better to just get it running than it is to improve its performance.
5 There’s a clear parallel here to the various ways in which I’ve
solved jigsaw-puzzle-based geocaches, because I’m far more interested in (a) programming and (b) getting out into the world and finding geocaches in interesting places than I am
in doing a virtual jigsaw puzzle!
Last night I was chatting to my friend (and fellow Three Rings volunteer) Ollie about our respective
workplaces and their approach to AI-supported software engineering, and it echoed conversations I’ve had with other friends. Some workplaces, it seems, are leaning so-hard into
AI-supported software development that they’re berating developers who seem to be using the tools less than their colleagues!
That’s a problem for a few reasons, principal among them that AI does not
make you significantly faster but does make you learn less1. I stand by the statement that AI isn’t useless, and I’ve experimented with it for years. But I certainly wouldn’t feel very comfortable
working somewhere that told me I was underperforming if, say, my code contributions were less-likely than the average to be identifiably “written by an AI”.
Even if you’re one of those folks who swears by your AI assistant, you’ve got to admit that they’re not always the best choice.
I ran into something a little like what Ollie described when an AI code reviewer told me off for not describing how my AI agent assisted me with the code change… when no AI had been
involved: I’d written the code myself.2
I spoke to another friend, E, whose employers are going in a similar direction. E joked that at current rates they’d have to start tagging their (human-made!) commits with fake
AI agent logs in order to persuade management that their level of engagement with AI was correct and appropriate.3
Supposing somebody like Ollie or E or anybody else I spoke to did feel the need to “fake” AI agent logs in order to prove that they were using AI “the right way”… that sounds
like an excuse for some automation!
I got to thinking: how hard could it be to add a git hook that added an AI agent’s “logging” to each commit, as if the work had been done by a
robot?4
Turns out: pretty easy…
To try out my idea, I made two changes to a branch. When I committed, imaginary AI agent ‘frantic’ took credit, writing its own change log. Also: asciinema + svg-term remains awesome.
Here’s how it works (with source code!). After you make a commit, the post-commit hook creates a file in
.agent-logs/, named for your current branch. Each commit results in a line being appended to that file to say something like [agent] first line of your commit
message, where agent is the name of the AI agent you’re pretending that you used (you can even configure it with an array of agent names and it’ll pick one at
random each time: my sample code uses the names agent, stardust, and frantic).
There’s one quirk in my code. Git hooks only get the commit message (the first line of which I use as the imaginary agent’s description of what it did) after the commit has
taken place. Were a robot really used to write the code, it’d have updated the file already by this point. So my hook has to do an --amend commit, to
retroactively fix what was already committed. And to do that without triggering itself and getting into an infinite loop, it needs to use a temporary environment variable.
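In case it’s useful as a starting point, here’s a simplified Ruby sketch of the same idea – not my actual hook (that’s linked above), and the `FAKE_AGENT_AMEND` guard-variable name is just one I picked for illustration:

```ruby
#!/usr/bin/env ruby
# Simplified sketch of a post-commit hook (save as .git/hooks/post-commit and
# make it executable) that fakes an AI agent's change log for each commit.
require "fileutils"

AGENTS = %w[agent stardust frantic]

# Append a fake "[agent] summary" line to the per-branch log file and return
# the line that was written.
def log_fake_agent(dir, branch, subject, agents: AGENTS, rng: Random.new)
  FileUtils.mkdir_p(dir)
  line = "[#{agents.sample(random: rng)}] #{subject}"
  File.open(File.join(dir, branch), "a") { |f| f.puts(line) }
  line
end

# The hook itself: the environment variable guards against the --amend below
# re-triggering this same hook forever.
def run_hook!
  return if ENV["FAKE_AGENT_AMEND"]
  branch  = `git rev-parse --abbrev-ref HEAD`.strip
  subject = `git log -1 --pretty=%s`.strip   # first line of the commit message
  log_fake_agent(".agent-logs", branch, subject)
  env = { "FAKE_AGENT_AMEND" => "1" }
  system(env, "git", "add", File.join(".agent-logs", branch))
  system(env, "git", "commit", "--amend", "--no-edit")
end

run_hook! if $PROGRAM_NAME.end_with?("post-commit")
```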
Ignoring that, though, there’s nothing particularly special about this code. It’s certainly more-lightweight, faster-running, and more-accurate than a typical coding LLM.
Sure, my hook doesn’t attempt to write any of the code for you; it just makes it look like an AI did. But in this instance: that’s a feature, not a
bug!
Footnotes
1 That research comes from Anthropic. Y’know, the company who makes Claude, one of the
most-popular AIs used by programmers.
3 Using “proportion of PRs that used AI” as a metric for success seems to me to be just
slightly worse than using “number of lines of code produced”. And, as this blog post demonstrates, the
former can be “gamed” just as effectively as the latter (infamously) could.
4 Obviously – and I can’t believe I have to say this – lying to your employer isn’t a
sensible long-term strategy, and instead educating them on what AI is (if anything) and isn’t good for in your workflow is a better solution in the end. If you read this blog post and
actually think for a moment hey, I should use this technique, then perhaps there’s a bigger problem you ought to be addressing!
Today, an AI review tool used by my workplace reviewed some code that I wrote, and incorrectly claimed that it would introduce a bug because a global variable I created could “be
available to multiple browser tabs” (that’s not how browser JavaScript works).
Just in case I was mistaken, I explained to the AI why I thought it was wrong, and asked it to explain itself.
To do so, the LLM wrote a PR to propose adding some code to use our application’s save mechanism to pass the data back, via the server, and to any other browser tab, thereby creating
the problem that it claimed existed.
This isn’t even the most-efficient way to create this problem. localStorage would have been better.
So in other words, today I watched an AI:
(a) claim to have discovered a problem (that doesn’t exist),
(b) when challenged, attempt to create the problem (that wasn’t needed), and
(c) do so in a way that was suboptimal.
Humans aren’t perfect. A human could easily make one of these mistakes. Under some circumstances, a human might even have made two of these mistakes. But to make all three? That took an
AI.
What’s the old saying? “To err is human, but to really foul things up you need a computer.”
Highlight of my workday was debugging an issue that turned out to be nothing like what the reporter had diagnosed.
The report suggested that our system was having problems parsing URLs with colons in the pathname, suggesting perhaps an encoding issue. It wasn’t until I took a deep dive into the logs
that I realised that this was a secondary characteristic of many URLs found in customers’ SharePoint installations. And many of those URLs get redirected. And SharePoint often uses
relative URLs when it sends redirections. And it turned out that our systems’ redirect handler… wasn’t correctly handling relative URLs.
It all turned into a hundred-line automated test to mock SharePoint and demonstrate the problem… followed by a tiny two-line fix to the actual code. And probably the
most-satisfying part of my workday!
Further analysis on a smaller pcap pointed to these mysterious packets arriving ~20ms apart.
This was baffling to me (and to Claude Code). We kicked around several ideas like:
SSH flow control messages
PTY size polling or other status checks
Some quirk of bubbletea or wish
One thing stood out – these exchanges were initiated by my ssh client (stock ssh installed on macOS) – not by my server.
…
In 2023, ssh added keystroke timing obfuscation. The idea is that the speed at
which you type different letters betrays some information about which letters you’re typing. So ssh sends lots of “chaff” packets along with your keystrokes to make it hard for an
attacker to determine when you’re actually entering keys.
That makes a lot of sense for regular ssh sessions, where privacy is critical. But it’s a lot of overhead for an open-to-the-whole-internet game where latency is critical.
…
Keystroke timing obfuscation: I could’ve told you that! Although I wouldn’t necessarily have leapt to the possibility of mitigating it server-side by patching-out support for (or at
least: the telegraphing of support for!) it; that’s pretty clever.
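Incidentally, if you’re running OpenSSH 9.5 or later, this behaviour can also be controlled from the client side via the ObscureKeystrokeTiming option in your ssh_config (the hostname below is just an example, and obviously you should only switch it off where the privacy trade-off makes sense):

```
# ~/.ssh/config — turn off keystroke-timing chaff for one latency-sensitive host
Host game.example.com
    ObscureKeystrokeTiming no
```

The default is equivalent to `ObscureKeystrokeTiming interval:20` – which neatly explains those mysterious packets arriving ~20ms apart.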
Altogether this is a wonderful piece demonstrating the whole “engineer mindset”. Detecting a problem, identifying it, understanding it, fixing it, all tied-up in an engaging narrative.
And after playing with his earlier work, ssh tiny.christmas – which itself inspired me to learn a little Bubble Tea/Wish (I’ve got Some Ideas™️) – I’m quite excited to see where this new
ssh-based project of Royalty’s is headed!
This is a blog post about things that make me nostalgic for other things that, objectively, aren’t very similar…
When I hear Dawnbreaker, I feel like I’m nine years old…
…and I’ve been allowed to play OutRun on the arcade cabinet at West View
Leisure Centre. My swimming lesson has finished, and normally I’d go directly home.
On those rare occasions I could get away1
with a quick pause in the lobby for a game, I’d gravitate towards the Wonderboy machine. But there was something about the tactile
controls of OutRun‘s steering wheel and pedals that gave it a physicality that the “joystick and two buttons” systems couldn’t replicate.
The other thing about OutRun was that it always felt… fast. Like, eye-wateringly fast. This was part of what gave it such appeal2.
OutRun‘s main theme, Magical Sound Shower, doesn’t actually sound much like Dawnbreaker. But
both tracks somehow feel like… “driving music”?
But somehow when I’m driving or cycling and this song comes on, I’m instantly transported back to those occasionally-permitted childhood games of OutRun4.
When I start a new Ruby project, I feel like I’m eleven years old…
It’s not quite a HELLO WORLD, but it’s pretty-similar.
At first I assumed that the tedious bits and the administrative overhead (linking, compiling, syntactical surprises, arcane naming conventions…) were just what “real”, “grown-up”
programming was supposed to feel like. But Ruby helped remind me that programming can be fun for its own sake. Not just because of the problems you’re solving or the product
you’re creating, but just for the love of programming.
The experience of starting a new Ruby project feels just like booting up my Amstrad CPC and being able to joyfully write code that will just work.
I still learn new programming languages because, well, I love doing so. But I’m yet to find one that makes me want
to write poetry in it in the way that Ruby does.
When I hear In Yer Face, I feel like I’m thirteen years old…
…and I’m painting Advanced HeroQuest miniatures6 in the attic at my dad’s house.
I’ve cobbled together a stereo system of my very own, mostly from other people’s castoffs, and set it up in “The Den”, our recently-converted attic7,
and my friends and I would make and trade mixtapes with one another. One tape began with 808 State’s In Yer Face8,
and it was often the tape that I would put on when I’d sit down to paint.
Advanced HeroQuest came with some fabulously ornate secondary components, like the doors that were hinged so that their open/closed state could be toggled, and I spent
way too long painting almost the entirety of my base set.
In a world before CD audio took off, “shuffle” wasn’t a thing, and we’d often listen to all of the tracks on a medium in sequence9.
That was doubly true for tapes, where rewinding and fast-forwarding took time and seeking for a particular track was challenging compared to e.g. vinyl. Any given song would loop around
a lot if I couldn’t be bothered to change tapes, instead just flipping again and again10.
But somehow it’s whenever I hear In Yer Face11
that I’m transported right back to that time, in a reverie so corporeal that I can almost smell the paint thinner.
When I see a personal Web page, I (still) feel like I’m fifteen years old…
…and the Web is on the cusp of becoming the hot “killer application” for the Internet. I’ve been lucky enough to be “online” for a few years by now12,
and basic ISP-provided hosting would very soon be competing with cheap, free, and ad-supported services like Geocities to be “the
place” to keep your homepage.
Nowadays, even with a hugely-expanded toolbox, virtually every corporate homepage fundamentally looks the same:
Logo in the top left
Search and login in the top right, if applicable
A cookie/privacy notice covering everything until you work out the right incantation to make it go away without surrendering your firstborn child
A “hero banner“
Some “below the fold” content that most people skip over
A fat footer with several columns of links, to ensure that all the keywords are there so that people never have to see this page and the search engine will drop
them off at a relevant child page and not at one of their competitors’
Finally, a line of icons representing various centralised social networks: at least one is out-of-date, either because (a) it’s been renamed, (b) it’s changed its
branding, or (c) nobody with any moral fortitude uses that network any more14
But before the corporate Web became the default, personal home pages brought a level of personality that for a while I worried was forever dead.
2 Have you played Sonic Racing: CrossWorlds? The first time I played it I was overwhelmed by the speed and colours of the
game: it’s such a high-octane visual feast. Well that’s what OutRun felt like to those of us who, in the 1980s, were used to much-simpler and slower arcade games.
3 Also, how cool is it that Metrik has a blog, in this day and age? Max props.
4 Did you hear, by the way, that there’s talk of a movie adaptation of OutRun, which could turn out to be the worst
videogame-to-movie concept that I’ll ever definitely-watch.
5 In very-approximate order: C, Assembly, Pascal, HTML, Perl, Visual Basic (does that even
count as a “grown-up” language?), Java, Delphi, JavaScript, PHP, SQL, ASP (classic, pre-.NET), CSS, Lisp, C#, Ruby, Python (though I didn’t get on with it so well), Go, Elixir… plus
many others I’m sure!
6 Or possibly they were Warhammer Quest miniatures by this point; probably this memory spans one, and also the other, blended together.
7 Eventually my dad and I gave up on using the partially-boarded loft to intermittently
build a model railway layout, mostly using second-hand/trade-in parts from “Trains & Transport”, which was exactly the nerdy kind of model shop you’re imagining right now: underlit
and occupied by a parade of shuffling neckbeards, between whom young-me would squeeze to see if the mix-and-match bin had any good condition HO-gauge flexitrack. We converted the
attic and it became “The Den”, a secondary space principally for my use. This was, in the most part, a concession for my vacating of a large bedroom and instead switching to the
smallest-imaginable bedroom in the house (barely big enough to hold a single bed!), which in turn enabled my baby sister to have a bedroom of her own.
8 My copy of In Yer Face was possibly recorded from the radio by my friend ScGary, who always had a tape deck set up with his finger primed close to the record key when the singles chart came on.
9 I soon learned to recognise “my” copy of tracks by their particular cut-in and -out
points, static and noise – some of which, amazingly, survived into the MP3 era – and of course by the tracks that came before or after them. There
are still pieces of music where, when I hear them, I “expect” them to be followed by whatever came next on some mixtape I listened to a lot 30+ years
ago!
10 How amazing a user interface affordance was it that playing one side of an audio
cassette was mechanically-equivalent to (slowly) rewinding the other side? Contrast other tape formats, like VHS, which were one-sided and so while rewinding there was
literally nothing else your player could be doing. A “full” audio cassette was a marvellous thing, and I especially loved the serendipity where a recognisable “gap” on one
side of the tape might approximately line-up with one on the other side, meaning that you could, say, flip the tape after the opening intro to one song and know that you’d be
pretty-much at the start of a different one, on the other side. Does any other medium have anything quite analogous to that?
11 Which is pretty rare, unless I choose to put it on… although I did overhear it
“organically” last summer: it was coming out of a Bluetooth speaker in a narrowboat moored in the Oxford Canal near Cropredy, where I was using the towpath to return from a long walk to nearby Northamptonshire where I’d been searching for a geocache. This was a particularly surprising
place to overhear such a song, given that many of the boats moored here probably belonged to attendees of Fairport’s Cropredy Convention, at which – being a folk music festival – one
might not expect to see significant overlap of musical taste with “Madchester”-era acid house music!
12 My first online experiences were on BBS systems, of which my very first was on a
mid-80s PC1512 using a 2800-baud acoustic coupler! I got onto the Internet at a point in the early 90s at which the Web
existed… but hadn’t yet demonstrated that it would eventually come to usurp the services that existed before it: so I got to use Usenet, Gopher, Telnet and IRC before I saw
my first Web browser (it was Cello, but I switched to Netscape Navigator soon after it was released).
13 On the rare occasion I close my browser, these days, it re-opens with whatever
hundred or so tabs I was last using right back where I left them. Gosh, I’m a slob for tabs.
14 Or, if it’s a Twitter icon: all three of these.
15 Of course, they’re harder to find. SEO-manipulating behemoths dominate the search
results while social networks push their “apps” and walled gardens to try to keep us off the bigger, wider Web… and the more you cut both out of your online life, the calmer and
happier you’ll be.
This weekend, I received my copy of DOCTYPE, and man: it feels like a step back to yesteryear to type in a computer program from a
magazine: I can’t have done that in at least thirty years.
So yeah, DOCTYPE is a dead-tree-only magazine containing the source code for 10 Web pages which, when typed into your computer, each provide you with some kind of fun and
interactive plaything. Each of the programs is contributed by a different author, including several I follow and one or two with whom I’ve corresponded at some point or another, and each
brings their own personality and imagination to their contribution.
I opted to start with Stuart Langridge‘s The Nine Pyramids, a puzzle game about trying to connect all nodes in a 3×3 grid in a
continuous line bridging adjacent (orthogonal or diagonal) nodes without visiting the same node twice nor moving in the same direction twice in a row (that last provision is described
as “not visiting three in a straight line”, but I think my interpretation would have resulted in simpler code: I might demonstrate this, down the line!).
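For what it’s worth, my reading of the rule – “no two consecutive moves in the same direction” – can be sketched as a short validity check in Python (my own illustration of the rule, not Stuart’s actual code):

```python
# A sketch of a path-validity check for the puzzle rule as I read it:
# no node visited twice, and no two consecutive moves in the same direction.

def is_valid_path(path):
    """path: list of (row, col) nodes on a 3x3 grid, visited in order."""
    if len(path) != len(set(path)):
        return False  # a node was visited twice
    if any(r not in (0, 1, 2) or c not in (0, 1, 2) for r, c in path):
        return False  # off the grid
    directions = []
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        dr, dc = r2 - r1, c2 - c1
        if max(abs(dr), abs(dc)) != 1:
            return False  # each move must be to an adjacent node
        directions.append((dr, dc))
    # reject any two consecutive moves in the same direction
    return all(d1 != d2 for d1, d2 in zip(directions, directions[1:]))

print(is_valid_path([(0, 0), (0, 1), (1, 1)]))  # True: right, then down
print(is_valid_path([(0, 0), (0, 1), (0, 2)]))  # False: right twice in a row
```

Comparing consecutive direction pairs sidesteps any need to test three arbitrary nodes for collinearity, which is why I suspect it makes for simpler code.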
The puzzle actually made me stop to think about it for a bit, which was unexpected and pleasing!
Per tradition with this kind of programming, I made a couple of typos, the worst of which was missing an entire parameter in a CSS conic-gradient() which resulted in the
majority of the user interface being invisible: whoops! I found myself reminded of typing-in the code for Werewolves and
Wanderer from The Amazing Amstrad Omnibus, whose data section – the part most-liable to be affected by a typographic bug without introducing a syntax error – had
a helpful “checksum” to identify if a problem had occurred, and wishing that such a thing had been possible here!
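The kind of checksum those old type-in listings used could be as simple as summing the character codes of each data line and comparing against a printed expected value – here’s an illustrative Python sketch (not the Omnibus’s actual scheme):

```python
# An illustrative per-line checksum of the sort old type-in magazines printed
# alongside DATA sections, so readers could catch their own typos.

def line_checksum(line):
    """Sum of the character codes in the line, truncated to three digits."""
    return sum(ord(ch) for ch in line) % 1000

data_line = "120,45,200,17"  # a made-up DATA line
print(line_checksum(data_line))  # 634 - compare against the printed value
```

A single mistyped digit changes the sum, so a mismatch tells you exactly which line to re-check.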
But thankfully a tiny bit of poking in my browser’s inspector revealed the troublesome CSS and I was able to complete the code, and then the puzzle.
I’ve really been enjoying DOCTYPE, and you can still buy a copy if you’d like one of your own. It manages to simultaneously feel both fresh and nostalgic,
and that’s really cool.
The younger child and I were talking about maths on the school run this morning, and today’s topic was geometry. I was pleased to discover that he’s already got a reasonable
comprehension of the Pythagorean Theorem1:
I was telling him that I was about his age when I first came across it, but in my case I first had a practical, rather than theoretical, impetus to learn it.
It was the 1980s, and I was teaching myself Dr. Logo, Digital Research‘s implementation of the Logo programming language (possibly from this book). One day, I was writing a program to draw an indoor scene, including a window
through which a mountain would be visible. My aim was to produce something like this:
My window was 300 “steps”2
tall by 200 steps wide and bisected in both directions when I came to make my first attempt at the mountain.
And so, naively, starting from the lower-left, I thought I’d need some code like this:
RIGHT 45
FORWARD 100
RIGHT 90
FORWARD 100
But what I ended up with was this:
Hypotenuse? More like need-another-try-potenuse.
I instantly realised my mistake: of course the sides of the mountain would need to be longer so that the peak would reach the mid-point of the window and the far side
would hit its far corner. But how much longer ought it to be?
I intuited that the number I’d be looking for must be greater than 100 but less than 250: these were, logically, the bounds I was working within. 100 would be correct if my
line were horizontal (a “flat” mountain?), and 250 was long enough to go the “long way” to the centrepoint of the window (100 along, and 150 up). So I took a guess at 150 and… it was
pretty close… but still wrong:
I remember being confused and frustrated that the result was so close but still wrong. The reason, of course, is that the relationship between the lengths of the sides of a triangle
doesn’t scale in a 1:1 way, but this was the first time I found myself having to think about why.
So I found my mother and asked her what I was doing wrong. I’m sure it must have delighted her to dust-off some rarely-accessed knowledge from her own school years and teach me about
Pythagoras’!
The correct answer, of course, is given by Pythagoras: c = √(a² + b²) = √(100² + 100²).
I so rarely get to use MathML that I had to look up the syntax.
The answer, therefore, is… 141.421 (to three decimal places). So I rounded to 141 and my diagram worked!3
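If you’d rather let the computer do the arithmetic, a couple of lines of Python confirm it:

```python
import math

# Each leg of the right-angled "mountain" triangle is 100 turtle-steps.
side = 100

# Pythagoras: hypotenuse = sqrt(side**2 + side**2)
hypotenuse = math.hypot(side, side)

print(round(hypotenuse, 3))  # 141.421
```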
What made this maths lesson from my mother so memorable was that it fed a tangible goal. I had something I wanted to achieve, and I learned the maths that I
needed to get there. And now it’s indelibly etched onto my brain.
I learned the quadratic equation formula and how to perform algebraic integration by rote, and I guarantee that it’s less well-established in my long-term memory than, say, the sine and
cosine rules or how to solve a simultaneous equation because I’ve more-often needed to do those things outside of the classroom!
So I guess the lesson is that I should be trying to keep an eye out for practical applications of maths that I can share with my kids. Real problems that are interesting to solve, to
help build the memorable grounding that will later support the more-challenging and intangible abstract maths that they may wish to pursue.
Both kids are sharp young mathematicians, and the younger one seems especially to enjoy it, so feeding that passion feels well-worthwhile. Perhaps I should show them TRRTL.COM so they can try their hand at Logo!
2 Just one way that Logo is/was a cute programming language was its use of “steps” – as
in, turtle-steps – to measure distances. You might approximate them as pixels, but a “step” has meaning even for lines that don’t map linearly to pixels because they’re at wonky
angles, for example.
3 I’d later become unstuck by rounding, while trying to make a more-complex diagram with a
zig-zag pattern running along a ribbon: a small rounding error compounded over a long run and led to me being a couple of pixels off from where I intended. But that’s another
story.
I nerdsniped myself today when, during a discussion on the potential location of a taekwondo tournament organised by our local martial arts school, somebody claimed that Scotland would be “nearer”
than Ireland.
I don’t dispute that somebody living near me can get to Scotland faster than Ireland, unless they can drive at motorway speeds across Wales… and the Irish Sea. But the word
they used was nearer, and I can be a pedantic arse.
But the question got me thinking:
Could I plot a line across Great Britain, showing which parts are closer to Scotland and which parts are closer to Ireland?
If the England-facing Irish and Scottish borders were completely straight, one could simply extend the borders until they meet, bisect the angle, and we’d be done.
Of course, the borders aren’t straight. They also don’t look much like this. I should not draw maps.
In reality, the border between England and Scotland is a winding mess, shaped by 700 years of wars and treaties1.
Treating the borders as straight lines is hopelessly naive.
Voronoi diagrams are pretty, and cool, and occasionally even useful! This one expands from points, but there’s no reason you can’t expand from a line (like a border!) instead.
My Python skills are pretty shit, but it’s the best tool for the job for geohacking2. And so, through a
combination of hacking, tweaking, and crying, I was able to throw together a script that produces a wonderful
slightly-wiggly line up the country.
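The core idea reduces to a nearest-neighbour test: sample lots of points along each border, then classify any location by which border’s nearest sample is closer. A toy flat-plane sketch with invented coordinates (nothing like my actual script, which chewed through real geodata):

```python
# A flat-plane sketch of the classification idea. The border "samples"
# below are invented coordinates, purely for illustration.

import math

def min_distance(point, border):
    """Smallest straight-line distance from point to any sampled border point."""
    px, py = point
    return min(math.hypot(px - bx, py - by) for bx, by in border)

# Hypothetical sampled points (x, y) along each border, on a flat plane.
scottish_border = [(0.0, 10.0), (2.0, 10.5), (4.0, 9.8)]
irish_coast = [(-6.0, 4.0), (-5.5, 6.0), (-6.2, 8.0)]

def closer_to(point):
    """Classify a point by whichever border it lies nearer to."""
    s = min_distance(point, scottish_border)
    i = min_distance(point, irish_coast)
    return "Scotland" if s < i else "Ireland"

print(closer_to((1.0, 9.0)))   # a point near the Scottish border samples
print(closer_to((-5.0, 5.0)))  # a point near the Irish coast samples
```

The “slightly-wiggly line” is just the locus of points where the two minimum distances are equal; plot the classification over a fine grid and the boundary falls out.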
The entire island of Ireland is used here to determine boundaries (you can tell because otherwise parts of County Antrim, in Northern Ireland, would be marked as closer to Scotland
than the Republic of Ireland: which they are, of course, but the question was about England!).
Once you’ve bisected England in this way – into parts that are “closer to Ireland” versus parts that are “closer to Scotland”, you start to spot all kinds of interesting
things3.
Like: did you know that the entire subterranean part of the Channel Tunnel is closer to Scotland than it is to Ireland… except for the ~2km closest to the UK exit?
A little further North: London’s six international airports are split evenly across the line, with Luton, Stansted and Southend closer to Scotland… and City, Heathrow and Gatwick closer
to Ireland.
The line then pretty-much bisects Milton Keynes, leaving half its population closer to Scotland and half closer to Ireland, before doing the same to Daventry, before – near Sutton
Coldfield – it drives right through the middle of the ninth hole of the golf course at the Lea Marston Hotel.
Players tee off closer to Ireland and – unless they really slice it – their ball lands closer to Scotland:
In Cannock, it bisects the cemetery, dividing the graves into those on the Scottish half and those in the Irish half:
The line crosses the Welsh border at the River Dee, East of Wrexham, leaving a narrow sliver of Wales that’s technically closer to Scotland than it is to Ireland, running up the
coastline from Connah’s Quay to Prestatyn and going as far inland as Mold before – as is the case in most of Wales – you’re once again closer to Ireland:
If you live in Flint or Mold, ask your local friends whether they live closer to Ireland or Scotland. The answer’s Scotland, and I’m confident that’ll surprise them.
I’d never have guessed that there were any parts of Wales that were closer to Scotland than they were to Ireland, but the map doesn’t lie4.
Anyway: that’s how I got distracted, today. And along the way I learned a lot about geodata encoding, a little about Python, and a couple of surprising things about geography5.
2 Or, at least: it’s the one that’s most-widely used and so I could find lots of helpful
StackOverflow answers when I got stuck!
3 Interesting… if you’re specifically looking for some geographical trivia, that is!
4 Okay, the map lies a little. My program was only simple so it plotted
everything on a flat plane, failing to account for Earth’s curvature. The difference is probably marginal, but if you happen to live on or very close to the red line, you might
need to do your own research!
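A curvature-aware version would swap the flat-plane distance for a great-circle one – for example the haversine formula, sketched here in Python (the coordinates in the example are rough, for illustration only):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (latitude, longitude) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximately: Milton Keynes to Edinburgh (rough coordinates).
print(round(haversine_km(52.04, -0.76, 55.95, -3.19)))
```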
5 Like: Chester and Rugby are closer to Scotland than they are to Ireland, and Harpenden
and Towcester are closer to Ireland than they are to Scotland! Who knew?
Scroll art is a form of ASCII art where a program generates text output in a command line terminal. After the terminal window
fills, it begins to scroll the text upwards and create an animated effect. These programs are simple, beautiful, and accessible as programming projects for beginners. The Scroll Art Museum (SAM) is an
online collection of scroll art examples.
Here are some select pieces:
Zig-zag, a simple periodic pattern in a dozen lines of code.
Program output is limited to text (though this could include emoji and color).
Once printed, text cannot be erased. It can only scroll up.
But these restrictions compel creativity. The benefit of scroll art is that beginner programmers can create scroll art apps with a minimal amount of experience. Scroll art
requires knowing only the programming concepts of print, looping, and random numbers. Every programming language has these features, so scroll art can be created in
any programming language without additional steps. You don’t have to learn heavy abstract coding concepts or configure elaborate software libraries.
…
Okay, so: scroll art is ASCII art, except the magic comes from the fact that it’s very long and as your screen scrolls to show it, an animation effect becomes apparent. Does that make
sense?
Anyway, The Scroll Art Museum has lots of them, and they’re much better than mine. I especially love the faux-parallax effect in Skulls and Hearts, created by a “background” repeating pattern being scrolled by a number of lines slightly off from its
repeat frequency while a foreground pattern with a different repeat frequency flies by. Give it a look!
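To show quite how little code a piece of scroll art needs, here’s a minimal zig-zag in Python – my own toy in the spirit of SAM’s simplest exhibits, not code from the site:

```python
import time

WIDTH = 40  # how far the zig-zag swings, in characters

indent, step = 0, 1
for _ in range(200):          # print 200 lines of output, then stop
    print(' ' * indent + '*')
    indent += step
    if indent in (0, WIDTH):  # bounce off either edge
        step = -step
    time.sleep(0.01)          # slow the scroll so the animation is visible
```

Swap the `'*'` for an emoji, or randomise the bounce width, and you’re already improvising your own exhibit.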