Once You’re Asking the Right Question, You Don’t Need To Ask!

Folks at work have been encouraging me to make more use of generative AI in my workflow1; going beyond my current “fancy autocomplete” use and giving my agents more autonomy. My experience of such “vibe coding” so far has been… mixed2, but I promised I’d revisit it.

One thing that these models are usually effective at is summarisation3. This is valuable if you’re faced with a large and unfamiliar codebase and you’re looking to trace a particular thing but you’re not certain where it is or what it’ll be called. While they’re not always fast, these tools can at least work in the background, which allows the developer to get on with something else while the agent trawls logs, code, and configuration to find and explain a fuzzily-defined thing.

Recently, I had a moment which I thought might be such an instance… but it didn’t turn out quite the way I expected. Here’s the story4:

The broken dev env

I’d been drafted into an established and ongoing project to provide more hands, following a coworker’s departure last week. This project touches parts of our (sprawling, microservices-based) infrastructure that I hadn’t looked at before, so there was a lot I didn’t yet know.

I picked up an issue of my former colleague’s that QA had rejected and set out to retrace their steps: to replicate the problem that the QA engineers had identified and, in doing so, learn more about the underlying process. I spun up my development environment and tried to follow the steps.

A popup error message saying "Oops, something went wrong. Please try again."
The process failed… but much earlier than QA had said it would. Clearly my development environment was at fault, or at least not representative of their setup.

But I couldn’t even get as far as their problem before my frontend barfed out an error message. Sigh! Probably there’s some configuration I’ve missed somewhere in the myriad microservices, or else the data I’m testing with isn’t a fair reflection of what they’re doing as-standard.

Following some staff changes, I had no teammates on this side of the Atlantic who could help me decipher this: a “quick question on Slack” wouldn’t get an answer for hours. It was time to start debugging!

But… maybe Claude could help? It’s got access to almost all the same code, logs, tools and browser windows I do. I started typing:

✨ What’s up next, Dan?

In my development environment for https://service.dev/asset/new, when I click “Save”, I see the error “Oops, something went wrong.” Why?

Context is key

It’s quite possible that Claude would have gone away, had a “think”, done some tests, and then come back to me with a believable answer. It might even have been correct, and I’d have been able to short-cut my way back to productivity (and I’d have time to make a mug of coffee and finish reading my emails while it did so). Then, I’d just have to check that it was right, make the change, and get on with things.

But I realised that it’d probably work faster (and cheaper, and using less energy) if it had slightly more context from the get-go, so I elaborated. The first thing I’d want to know if I were debugging this is what was actually happening behind the scenes. I dipped into my browser’s Network debugger and extracted the relevant output, adding it to my prompt:

✨ What’s up next, Dan?

In my development environment for https://service.dev/asset/new, when I click “Save”, I see the error “Oops, something went wrong.” Why?

The payload POSTed to the server is { content: 'test1', audience: [ 'one' ], status: 'draft' } and the response is an HTTP 500 with the following stack trace: [pasted 94 lines]

That’s more like it: now I could let it get on with its work. But wait…

Rubberducking

There’s a concept in computer programming called “rubberducking”. The name comes from an anecdote in The Pragmatic Programmer about a developer who, when stuck on a problem, would explain the code line-by-line to a rubber duck. The thinking is that talking-through a problem, even to someone (or something) who doesn’t understand it, can lead the speaker to insights they were otherwise missing.

I’ve done it myself many, many times: recruiting a convenient colleague or friend, talking them through the technical problem I’m faced with, and inviting them to ask me to go into greater detail if I seem to be skimming over anything. I can promise that it can work.

A witch is happy and proud of her invention - a rubber duck. She explains to her friend: I just figured that formulating my questions out loud helps me to solve them, and finally that's all I needed.
I discovered Mini Fantasy Theater recently and loved this episode from its backlog.

The panel above is part of a series in which a sorceress called Cepper is coerced by her university into using Avian Intelligence (“AI”) – a robotic parrot5 that her headmaster insists is the future of magic. She experiments with it, finds it occasionally useful but more-often frustrating, attempts to implement her own local version but finds that troublesome in different ways, and eventually settles on using an inanimate rubber duck instead. I get it, Cepper!

Let’s put that distraction aside for a moment and get back to the story of my broken development environment.

Clues in the stack trace

The top entry in the stack trace was an unsuccessful call to a different microservice, so I figured I’d pull its logs too, to help steer the AI in the right direction6:

✨ What’s up next, Dan?

In my development environment for https://service.dev/asset/new, when I click “Save”, I see the error “Oops, something went wrong.” Why?

The payload POSTed to the server is { content: 'test1', audience: [ 'one' ], status: 'draft' } and the response is an HTTP 500 with the following stack trace: [pasted 94 lines]

The stack trace suggests that a call is being made to the dojo backend service, where the following error log looks relevant: [pasted 9 lines]

I haven’t tried it, but I’m pretty confident that the LLM, after much number-crunching and a little warming-up of some datacentre somewhere, would get to the answer. But again, I found something niggling inside me: the second-from-top line in the dojo logs suggested that a connection was being made to a further, deeper microservice.

I should pull its logs too, I figured.

The final puzzle piece

As an aide-mémoire – in a way I’ve taken to doing when taking notes or when talking to AI – I first typed out what I was going to provide. This is useful if, for example, somebody distracts me at a key moment: it means I’ve got a jumping-off point predefined by my past self:

✨ What’s up next, Dan?

In my development environment for https://service.dev/asset/new, when I click “Save”, I see the error “Oops, something went wrong.” Why?

The payload POSTed to the server is { content: 'test1', audience: [ 'one' ], status: 'draft' } and the response is an HTTP 500 with the following stack trace: [pasted 94 lines]

The stack trace suggests that a call is being made to the dojo backend service, where the following error log looks relevant: [pasted 9 lines]

It’s calling osiris, which says:

I dipped into the directory for osiris and, before I even got to the logs, I spotted a problem: that microservice was on an old feature branch. How odd! I switched to the main branch and… everything started working.

The entire event took only a few minutes. I’d find some information, type it into Claude’s input field, realise that more information could be valuable, and repeat.

By the time I’d finished describing the problem, I’d discovered the solution. That’s the essence of successful rubberducking. I didn’t need the AI at all. All I needed was the illusion of something that might be able to help if I just talked through what I was thinking.

I don’t know what the moral is, here.

I wonder if I’d have been as effective had I just typed into my text editor. I suppose I would have, but I wonder if I’d have been motivated to do so in the first place? I’ve tried rubberducking before by talking to an imaginary person, but I’ve never tried typing to one7; maybe I should start?

Footnotes

1 I’m pretty sure every engineering department nowadays has its rabid fanboys, but I’m pleased that for the most part my colleagues take a more-pragmatic and realistic outlook: balancing the potential benefits of LLM-assisted coding with its many shortfalls, downsides, and risks.

2 My experience of vibe-coding in a nutshell: LLMs are great at knocking out the easy 80% of any engineering problem, but often in a way that makes the remaining 20% – already the hard part – harder than it would have been if a human had done the first 80% (especially if it’s the same human and they can bring their learnings with them)… and I’m definitely not the only one who’s found that. I also suspect that the unsatisfying and unimproving task of shepherding a flock of agents to write code and then casually reviewing it is not significantly more-productive (which research backs up) and results in a significantly increased regression rate… but I’m ready to be proven wrong when more studies come out. In short: I continue to think that GenAI isn’t useless, but neither is it necessarily always worthwhile.

3 So long as what you’ve got them summarising is something you can later verify!

4 I’ve taken huge liberties with strict factual accuracy to make this more-readable, as well as to avoid exposing things I probably oughtn’t. So before you swoop in to criticise my prompt-fu (not that I asked you, but I know there’s somebody out there who’s thinking about doing this right now), please note that none of the text on this page is what I actually wrote to the AI; it’s a figurative example.

5 A literal stochastic parrot, one might say!

6 I’d had an experience just the previous week in which it’d gone off on completely the wrong track, attempting to change code in order to “fix” what was ultimately a configuration or data problem, and so I thought it might be useful to give it some rails to follow, to start with.

7 Except insofar as this AI agent is an “imaginary person”, which is possibly already a step too far in implying personhood for my liking!


What breaks?

What breaks when one of your developers leaves?

On Friday, I said goodbye to a colleague as she left us after most of a decade with the company. Then this morning, all hell broke loose on some production servers.

DataDog graph marked to show a co-worker's departure on Friday evening being immediately followed by a flurry of errors which then stopped for the weekend before coming back in force on Monday morning.

It turns out that the API key that connected our application to our feature flag management platform was associated with her account, and hadn’t shown up in the exit audit.

Let this be your reminder to go check where, if anywhere, your applications are using person-specific keys where they should be using generic ones!


My Salary History

Jeremy Keith posted his salary history last week. I absolutely agree with him that employers exploit the information gap created by opaque salary advertisement, and I think that our industry of software engineering is especially troublesome for this.

So I’m joining him (and others) in choosing to share my salary history. I’ve set up a new page for that purpose, but here’s the summary of its initial state:

Understand

A few understandings and caveats:

  • For most of my career I’ve described myself as a “Full-Stack Web Applications Developer”, but I’ve worked outside of every one of those words and my job titles have often been more like “CMS Developer” or “Senior Engineer (Security)”.
  • My specialisms and “hot areas” are security engineering, web standards, performance, and accessibility.
  • When I worked multiple roles in a year, I’ve tried to capture that, but there’ll be some fuzziness around the edges.
  • The salaries are rounded slightly to make nice readable numbers.
  • I’ve not always worked full-time; all salaries are translated into “full-time equivalent”1.
  • I’ve only included jobs that fit into my software engineering career2.
  • If the table below looks out-of-date then I’ve probably just forgotten to update it. Let me know!

History

| Year | Employer | Salary | Notes |
|------|----------|--------|-------|
| 2025 – 2026 | Firstup | £80,000 | Remote. + Stock (spread over four years). |
| 2024 – 2025 | Automattic | £111,000 | Remote. + Stock (one-off bonus, worth ~£6,000). |
| 2023 | Automattic | £103,000 | Remote. |
| 2021 – 2022 | Automattic | £98,000 | Remote. |
| 2019 – 2020 | Automattic | £89,000 | Remote. |
| 2015 – 2019 | Bodleian Libraries + Freelance | £39,000 | Hybrid. |
| 2011 – 2014 | Bodleian Libraries | £36,000 | Practices salary transparency! ❤️ |
| 2010 – 2011 | SmartData + Freelance | £26,000 | Remote. |
| 2007 – 2009 | SmartData | £24,000 | |
| 2004 – 2006 | SmartData | £19,500 | |
| 2002 – 2003 | SmartData | £16,500 | Alongside full-time study. |
| 2002 | CTA | £18,000 | |
| 2001 | Freelance | £4,500 | Ad-hoc and hard to estimate. Alongside full-time study. |

What does that look like?

I drew a graph, but I don’t like it. Mostly because I don’t see my salary as a “goal” to aim for or some kind of “score”.

It’s gone up; it’s gone down; but I’ve always been more-motivated by what I’m working on, with whom, and for what purpose than by how much I get paid for it3. But if you want to see:

Graph showing my salary history: the same data as is in the table above.

I’m not sure to what degree my career looks typical or not. But I guess I also don’t care! My motivations are probably different than most (a little-more idealistic, a little-less capitalistic), I’d guess.

Footnotes

1 i.e. what I’d have earned if I had worked full-time

2 That summer back in college that I worked in a factory building striplight fittings doesn’t appear, for example!

3 Pro-tip if you’re looking at my CV and pitching me an opportunity: mention what you expect to pay, sure, but if you’re trying to win me over then tell me about the problems I’ll be solving and how that’ll make the world a better place. That’s how you motivate me to accept your offer!


To really foul things up you need an AI

Today, an AI review tool used by my workplace reviewed some code that I wrote, and incorrectly claimed that it would introduce a bug because a global variable I created could “be available to multiple browser tabs” (that’s not how browser JavaScript works).

Just in case I was mistaken, I explained to the AI why I thought it was wrong, and asked it to explain itself.

To do so, the LLM wrote a PR to propose adding some code to use our application’s save mechanism to pass the data back, via the server, and to any other browser tab, thereby creating the problem that it claimed existed.

This isn’t even the most-efficient way to create this problem. localStorage would have been better.
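For anyone wondering why the original claim was nonsense: every browser tab runs its own JavaScript realm, so a global variable is inherently per-tab. If you genuinely wanted cross-tab state – we didn’t! – the platform already provides it. Here’s a minimal sketch (the variable and channel names are my own, not our application’s):

```typescript
// Each browser tab runs its own JavaScript realm: a global variable set
// in one tab simply cannot be seen by another tab.
(window as any).perTabState = "only this tab can see me"; // hypothetical name

// If you *wanted* cross-tab state (we didn't!), the platform provides it.
// localStorage is shared across all tabs on the same origin...
localStorage.setItem("sharedState", "any tab on this origin can read me");

// ...and other tabs are notified when it changes:
window.addEventListener("storage", (event: StorageEvent) => {
  if (event.key === "sharedState") {
    console.log(`Another tab set sharedState to ${event.newValue}`);
  }
});

// Or, more directly, a BroadcastChannel:
new BroadcastChannel("shared-state").postMessage("hello, other tabs");
```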

So in other words, today I watched an AI:
(a) claim to have discovered a problem (that doesn’t exist),
(b) when challenged, attempt to create the problem (that wasn’t needed), and
(c) do so in a way that was suboptimal.

Humans aren’t perfect. A human could easily make one of these mistakes. Under some circumstances, a human might even have made two of these mistakes. But to make all three? That took an AI.

What’s the old saying? “To err is human, but to really foul things up you need a computer.”

Mocking SharePoint

Highlight of my workday was debugging an issue that turned out to be nothing like what the reporter had diagnosed.

The report suggested that our system was having problems parsing URLs with colons in the pathname, implying perhaps an encoding issue. It wasn’t until I took a deep dive into the logs that I realised that the colons were incidental: they’re simply a common characteristic of URLs in customers’ SharePoint installations. And many of those URLs get redirected. And SharePoint often uses relative URLs when it sends redirections. And it turned out that our systems’ redirect handler… wasn’t correctly handling relative URLs.
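I can’t share the actual patch, but the general shape of the bug and the fix looks something like this sketch (the URLs are hypothetical): a relative Location header has to be resolved against the URL that was originally requested, which the WHATWG URL API will do for you:

```typescript
// SharePoint may answer a request for
// https://example.sharepoint.com/sites/a:b/doc with a relative redirect
// like "Location: doc2" or "Location: /sites/a:b/doc2". A handler that
// assumes Location is always absolute will choke; the fix is to resolve
// it against the request URL:
function resolveRedirect(requestUrl: string, locationHeader: string): string {
  // new URL(relative, base) implements standard reference resolution,
  // including path segments that contain colons.
  return new URL(locationHeader, requestUrl).toString();
}

resolveRedirect("https://example.sharepoint.com/sites/a:b/doc", "doc2");
//=> "https://example.sharepoint.com/sites/a:b/doc2"

resolveRedirect("https://example.sharepoint.com/sites/a:b/doc", "/sites/a:b/doc2");
//=> "https://example.sharepoint.com/sites/a:b/doc2"

// Colons really do set traps, though: a relative reference whose *first*
// segment contains a colon (e.g. "a:b/doc2") parses as though "a" were a
// URL scheme; RFC 3986 says it must be written "./a:b/doc2" instead.
```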

It all turned into a hundred-line automated test to mock SharePoint and demonstrate the problem… followed by a tiny two-line fix to the actual code. And probably the most-satisfying part of my workday!

Peripheral Vision

As I lay in bed the other night, I became aware of an unusually-bright LED, glowing in the corner of my room1. Lying still in the dark, I noticed that looking directly at the light meant that I couldn’t see it… but when I looked straight ahead – not at it – I could make it out.

Animated illustration showing how an eyeball that rotates to face a light source can have that light obstructed by an intermediary obstacle, but when it looks "away" some of the light can hit the pupil as a consequence of its curved shape now appearing "above the horizon" of the obstacle.
In my bedroom the obstruction was the corner of my pillow, not a nondescript black rectangle. Also: my eyeball was firmly within my skull and not floating freely in a white void.

This phenomenon seems to be most-pronounced when you’re using a single eye to look at something small and point-like (like an LED), and where there’s an obstacle closer to your eye than to the thing you’re looking at. But it’s still a little spooky2.

It’s strange how sometimes you might be less-able to see something that you’re looking directly at… than something that’s only in your peripheral vision.

I’m now at six months since I started working for Firstup.3 And as I continue to narrow my focus on the specifics of the company’s technology, processes, and customers… I’m beginning to lose sight of some of the things that were in my peripheral vision.

Dan, a white man with blue hair, wears headphones and a grey 'Firstup' hoodie, holding a 'Firstup'-branded shoebox.
I’ve not received quite so many articles of branded clothing and other merchandise from my new employer as I did from my previous one, but getting useful ‘swag’ still feels cool.

I’m a big believer in the idea that folks who are new to your group (team, organisation, whatever) have a strange superpower that fades over time: the ability to look at “how you work” as an outsider and bring new ideas. It requires a certain boldness to not just accept the status quo but to ask “but why do we do things this way?”. Sure, the answer will often be legitimate and unchallengeable, but by using your superpower and raising the question you create a chance of valuable change.

That superpower has a sweet spot: a point at which a person knows enough about their new role that they can answer the easy questions, but hasn’t yet become so accustomed to the “quirks” that they can’t see them any longer. The point at which your peripheral vision still reveals where there’s room for improvement, because you’re not yet so-focussed on the routine that you overlook the objectively-unusual.

I feel like I’m close to that sweet spot, right now, and I’m enjoying the opportunity to challenge some of Firstup’s established patterns. Maybe there are things I’ve learned or realised over the course of my career that might help make my new employer stronger and better? Whether or not that turns out to be the case, I’m enjoying poking at the edges to find out!

Footnotes

1 The LED turned out to be attached to a laptop charger that was normally connected in such a way that it wasn’t visible from my bed.

2 Like the first time you realise that you have a retinal blind spot and that your brain is “filling in” the gaps based on what’s around it, like Photoshop’s “smart remove” tool is running within your head.

3 You might recall that I wrote about my incredibly-efficient experience of the recruitment process at Firstup.


Note #26999

In my first few weeks at my new employer, my code contributions have added 218 lines of code, deleted 2,663. Only one of my PRs has resulted in a net increase in the size of their codebases (by two lines).

GitHub change summary showing +218 lines added, -2,663 lines removed.

I need to pick up the pace if I’m going to reach the ultimate goal of deleting ALL of the code within my lifetime. (That’s the ultimate aim, right?)


Dan Has Too Many Monitors

My new employer sent me a laptop and a monitor, which I immediately added to my already pretty-heavily-loaded desk. Wanna see?

It’s a video. Click it to play it, of course.

Firstup Day 1

Off to my first day at Firstup. Gotta have an induction: get my ID badge, learn where the toilets are, how to refill the coffee machine, and all that jazz.

Except, of course, none of those steps will be part of my induction. Because, yet again, I’ve taken a remote-first position. I’m 100% sold that, for me, remote/distributed work helps me bring my most-productive self. It might not be for everybody, but it’s great for me.

And now: I’m going to find out where the water cooler is. No, wait… some other thing!

Firstup Recruitment

In a little over a week I’ll be starting my new role at Firstup, who use some of my favourite Web technologies to deliver tools that streamline employee communication and engagement.

I’m sure there’ll be more to say about that down the line, but for now: let’s look at my recruitment experience, because it’s probably the fastest and most-streamlined technical recruitment process I’ve ever experienced! Here’s the timeline:

Firstup Recruitment Timeline

  • Day 0 (Thursday), 21:18 – One evening, I submitted an application via jobs listing site Welcome To The Jungle. For comparison, I submitted an application for a similar role at a similar company at almost the exact same time. Let’s call them, umm… “Secondup”.
  • 21:42 – I received an automated response to say “Firstup have received your application”. So far, so normal.
  • 21:44 – I received an email from a human – only 26 minutes after my initial application – inviting me to an initial screener interview the following week and offering a selection of times (including a reminder of the relative timezone difference between the interviewer and me).
  • 21:55 – I replied to suggest meeting on Wednesday the following week1.
  • Day 6 (Wednesday), 15:30 – Half-hour screener interview, mostly an introduction, “keyword check” (can I say the right keywords about my qualifications and experience to demonstrate that, yes, I’m able to do the things they’ll need), and – because it’s 2025 and we live in the darkest timeline – a confirmation that I was a real human being and not an AI2. The TalOps person, Talia, says she’d like to progress me to an interview with the person who’d become my team lead, and arranges the interview then-and-there for Friday. She talked me through all the stages (max points to any recruiter who does this), and gave me an NDA to sign so we could “talk shop” in interviews if applicable.
Dan peeps out from behind a door, making a 'shush' sign with his finger near his lips. On the door is a printed sign that reads 'Interview happening in library. Please be quiet. Thanks!'
I only took the initial stages of my Firstup interviews in our library, moving to my regular coding desk for the tech tests, but I’ve got to say it’s a great space for a quiet conversation, away from the chaos and noise of our kids on an evening!
  • Day 8 (Friday), 18:30 – My new line manager, Kirk, is on the Pacific Coast of the US, so rather than wait until next week to meet I agreed to this early-evening interview slot. I’m out of practice at interviews and I babbled a bit, but apparently I had the right credentials because, at a continuing breakneck pace…
  • 21:32 – Talia emailed again to let me know I was through that stage, and asked to set up two live coding “tech test” interviews early the following week. I’ve been enjoying all the conversations and the vibes so far, so I try to grab the earliest available slots that I can make. This put the two tech test interviews back-to-back, to which Ruth raised her eyebrows – but to me it felt right to keep riding the energy of this high-speed recruitment process and dive right in to both!
  • Day 11 (Monday), 18:30 – Not even a West Coast interviewer this time, but because I’d snatched the earliest possible opportunity I spoke to Joshua early in the evening. Using a shared development environment, he had me doing a classic data-structures-and-algorithms style assessment: converting a JSON-based logical inference description, sort-of reminiscent of a Reverse Polish Notation tree, into something that looked more like pseudocode of the underlying boolean logic (I’ve sketched the general shape below, after this list). I spotted early on that I’d want a recursive solution, considered a procedural approach, and eventually went with a functional one. It was all going well… until it wasn’t! Working at speed, I made a frustrating early mistake that left me with the wrong data “down” my tree and needed to do some log-based debugging (the shared environment didn’t support a proper debugger, grr!) to get back on track… but I managed to deliver something that worked within the window, and talked at length through my approach every step of the way.
  • 19:30 – The second technical interview was with Kevin, and was more about systems design from a technical perspective. I was challenged to make an object-oriented implementation of a car park with three different sizes of spaces (for motorbikes, cars, and vans); vehicles can only fit into their own size of space or larger, except vans which – in the absence of a van space – can straddle three car spaces. The specification called for a particular API that could answer questions about the numbers and types of spaces available. Now warmed-up to the quirks of the shared coding environment, I started from a test-driven development approach: it didn’t actually support TDD, but I figured I could work around that by implementing what was effectively my API’s client, hitting my non-existent classes and their non-existent methods and asserting particular responses, before going and filling in those classes until they worked (a sketch of this one follows the list, too). I felt like I really “clicked” with Kevin as well as with the tech test, and was really pleased with what I eventually delivered.
  • Day 12 (Tuesday), 12:14 – I heard from Talia again, inviting me to a final interview with Kirk’s manager Xiaojun, the Director of Engineering. Again, I opted for the earliest mutually-convenient time – the very next day! – even though it would be unusually-late in the day.
  • Day 13 (Wednesday), 20:00 –  The final interview with Xiaojun was a less-energetic affair, but still included some fun technical grilling and, as it happens, my most-smug interview moment ever when he asked me how I’d go about implementing something… that I’d coincidentally implemented for fun a few weeks earlier! So instead of spending time thinking about an answer to the question, I was able to dive right in to my most-recent solution, for which I’d conveniently drawn diagrams that I was able to use to explain my architectural choices. I found it harder to read Xiaojun and get a feel for how the interview had gone than I had each previous stage, but I was excited to hear that they were working through a shortlist and should be ready to appoint somebody at the “end of the week, or early next week” at the latest.
Illustration of a sophisticated Least Recently Used Cache class with a doubly-linked list data structure containing three items, referenced from the class by its head and tail item. It has 'get' and 'put' methods. It also has a separate hash map indexing each cached item by its key.
This. This is how you implement an LRU cache.
  • Day 14 (Thursday), 00:09 – At what is presumably the very end of the workday in her timezone, Talia emailed me to ask if we could chat at what must be the start of her next workday. Or as I call it, lunchtime. That’s a promising sign.
  • 13:00 – The sun had come out, so I took Talia’s call in the “meeting hammock” in the garden, with a can of cold non-alcoholic beer next to me (and the dog rolling around on the grass). After exchanging pleasantries, she made the offer, which I verbally accepted then and there, and (after clearing up a couple of quick queries) signed a contract a few hours later. Sorted.
  • Day 23 – You remember that I mentioned applying to another (very similar) role at the same time? This was the day that “Secondup” emailed to ask about my availability for an interview. And while 23 days is certainly a more-normal turnaround for the start of a recruitment process, I’d already found myself excited by everything I’d learned about Firstup: there are some great things they’re doing right; there are some exciting problems that I can be part of the solution to… I didn’t need another interview, so I turned down “Secondup”. Something something early bird.
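As promised, here’s a sketch of the general shape of that first exercise. To be clear: this isn’t the real test – the structure, operators, and variable names are entirely my own invention – but the recursive, functional approach is the one I took:

```typescript
// A hypothetical JSON "logical inference" tree, loosely in the spirit of
// the real exercise: operators applied to arrays of operands.
type Expr =
  | { op: "and" | "or"; args: Expr[] }
  | { op: "not"; args: [Expr] }
  | { var: string };

// Recursively render the tree as readable pseudocode of the boolean logic.
function render(expr: Expr): string {
  if ("var" in expr) return expr.var;
  if (expr.op === "not") return `NOT ${render(expr.args[0])}`;
  // Parenthesise each level so the nesting survives flattening to infix form.
  return "(" + expr.args.map(render).join(` ${expr.op.toUpperCase()} `) + ")";
}

const tree: Expr = {
  op: "or",
  args: [
    { op: "and", args: [{ var: "isDraft" }, { var: "hasAudience" }] },
    { op: "not", args: [{ var: "isPublished" }] },
  ],
};

console.log(render(tree));
//=> ((isDraft AND hasAudience) OR NOT isPublished)
```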
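And here’s a sketch of the second. Again, the class and method names are my own reconstruction rather than the actual brief, but it demonstrates the “write the client first” trick I used in place of a real test framework:

```typescript
type Size = "motorbike" | "car" | "van";

// A car park with a fixed number of spaces of each size. Vehicles fit
// their own size of space or larger; vans can also straddle three car
// spaces when no van space is free.
class CarPark {
  private free: Record<Size, number>;
  private straddlingVans = 0; // tracked so un-parking could return the spaces

  constructor(motorbike: number, car: number, van: number) {
    this.free = { motorbike, car, van };
  }

  park(vehicle: Size): boolean {
    // Prefer the snuggest available fit: own size first, then larger.
    const preferences: Size[] =
      vehicle === "motorbike" ? ["motorbike", "car", "van"] :
      vehicle === "car" ? ["car", "van"] :
      ["van"];
    for (const size of preferences) {
      if (this.free[size] > 0) {
        this.free[size]--;
        return true;
      }
    }
    // The special case: a van may straddle three adjacent car spaces.
    if (vehicle === "van" && this.free.car >= 3) {
      this.free.car -= 3;
      this.straddlingVans++;
      return true;
    }
    return false;
  }

  freeSpaces(size: Size): number {
    return this.free[size];
  }
}

// The "client-first" assertions, written before CarPark existed:
const carPark = new CarPark(1, 3, 0);
console.assert(carPark.park("van") === true);  // no van space: straddles 3 car spaces
console.assert(carPark.freeSpaces("car") === 0);
console.assert(carPark.park("car") === false); // nothing left that fits a car
```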

Wow, that was fast!

With only eight days between the screener interview and the offer – and barely a fortnight after my initial application – this has got to be the absolute fastest I’ve ever seen a tech role recruitment process go. It felt like a rollercoaster, and I loved it.

Photograph of a looping steel rollercoaster against a setting sun. A logo of a flaming 'F' in the corner declares it Firstup: World's First Recruitment-Themed Rollercoaster. The track is annotated with the steps of the process in order, from application to screener to interview and so on.
Is it weird that I’d actually ride a recruitment-themed rollercoaster?

Footnotes

1 The earliest available slot for a screener interview, on Tuesday, clashed with my 8-year-old’s taekwondo class, which I’d promised I’d go along to and join in with as part of their “dads train free in June” promotion. This turned out to be a painful and exhausting experience which I thoroughly enjoyed, but more on that some other time, perhaps.

2 After realising that “are you a robot” was part of the initial checks, I briefly regretted taking the interview in our newly-constructed library because it provides exactly the kind of environment that looks like a fake background.


Smug Interview Moment

I’ve been in a lot of interviews over the last two or three weeks. But there’s a moment that stands out and that I’ll remember forever as the most-smug I’ve ever felt during an interview.

Close-up of part of a letter, the visible part of which reads: Dear Dan, We are pleased to offer you a position as Senior Softwa... / and reporting to the Company's Manager, Software E... / (the "Commencement Date"). You will receive an... / By accepting this offer you warrant and agree...
There’ll soon be news to share about what I’m going to be doing with the second half of this year…

This particular interview included a mixture of technical and non-technical questions, but a particular technical question stood out for reasons that will rapidly become apparent. It went kind-of like this:

Interviewer: How would you go about designing a backend cache that retains in memory some number of most-recently-accessed items?

Dan: It sounds like you’re talking about an LRU cache. Coincidentally, I implemented exactly that just the other week, for fun, in two of this role’s preferred programming languages (and four other languages). I wrote a blog post about my design choices: specifically, why I opted for a hashmap for quick reads and a doubly-linked-list for constant-time writes. I’m sending you the links to it now: may I talk you through the diagrams?

Interviewer:

'Excuse me' GIF reaction. A white man blinks and looks surprised.
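In case you’re wondering what I was so smugly describing: the design boils down to something like this (a fresh sketch for this post, not the code from the article I mentioned). A hash map gives O(1) lookup by key; a doubly-linked list, kept in most-recently-used-first order, gives O(1) promotion and eviction:

```typescript
// A node in the doubly-linked list: each cached item knows its neighbours.
class LRUNode<K, V> {
  prev: LRUNode<K, V> | null = null;
  next: LRUNode<K, V> | null = null;
  constructor(public key: K, public value: V) {}
}

class LRUCache<K, V> {
  private map = new Map<K, LRUNode<K, V>>(); // O(1) lookup by key
  private head: LRUNode<K, V> | null = null; // most-recently-used
  private tail: LRUNode<K, V> | null = null; // least-recently-used

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const node = this.map.get(key);
    if (!node) return undefined;
    this.moveToHead(node); // a read counts as a "use"
    return node.value;
  }

  put(key: K, value: V): void {
    const existing = this.map.get(key);
    if (existing) {
      existing.value = value;
      this.moveToHead(existing);
      return;
    }
    const lru = this.tail;
    if (this.map.size >= this.capacity && lru) {
      // At capacity: evict the least-recently-used item, in O(1).
      this.map.delete(lru.key);
      this.unlink(lru);
    }
    const node = new LRUNode(key, value);
    this.map.set(key, node);
    this.linkAtHead(node);
  }

  private moveToHead(node: LRUNode<K, V>): void {
    if (node === this.head) return;
    this.unlink(node);
    this.linkAtHead(node);
  }

  private unlink(node: LRUNode<K, V>): void {
    if (node.prev) node.prev.next = node.next;
    if (node.next) node.next.prev = node.prev;
    if (node === this.head) this.head = node.next;
    if (node === this.tail) this.tail = node.prev;
    node.prev = node.next = null;
  }

  private linkAtHead(node: LRUNode<K, V>): void {
    node.next = this.head;
    if (this.head) this.head.prev = node;
    this.head = node;
    if (!this.tail) this.tail = node;
  }
}

// Usage:
const cache = new LRUCache<string, number>(2);
cache.put("a", 1);
cache.put("b", 2);
cache.get("a");    // promotes "a" to most-recently-used
cache.put("c", 3); // evicts "b", the least-recently-used
```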

That’s probably the most-overconfident thing I’ve said at an interview since before I started at the Bodleian, 13 years ago. In the interview for that position I spent some time explaining that for the role they were recruiting for they were asking the wrong questions! I provided some better questions that I felt they should ask to maximise their chance of getting the best candidate… and then answered them, effectively helping to write my own interview.

Anyway: even ignoring my cockiness, my interview the other week was informative and enjoyable throughout, and I’m pleased that I’ll soon be working alongside some of the people that I met: they seem smart, and driven, and focussed, and it looks like the kind of environment in which I could do well.

But more on that later.
