You are someone who works in/for the government. You are scoping a tech intervention for a policy problem. It is 2024. You decide you don't want to parachute in and pitch a solution for a problem. You are all about stakeholder participation, bottom-up solutioning and all that jazz.
So you go and talk to the lead bureaucrat. Tea is ordered. You ask for green tea, thinking black coffee might bring you more attention than you need. You want to center the stakeholder. 2024, baby. They see you have some leverage (donor money, deep tech talent, etc.). They start telling you issues off the top of their mind where tech can be applied. You have their attention while files with more complex issues are put aside. You sip your green tea. You are appalled that it came with sugar. Before you know it, the discussion is no longer about problems but about possible solutions.
'Agar aisa kuch banayege, toh accha ho jata' (If you made something like this, it would be really good)
Wishes are made. Fantasies are fantasized.
You go back feeling good about the meeting. You have a way forward, this is something they want you to build, it is an internal ask, stakeholder be participating.
On the off chance you end up actually building said thing, it is highly likely it won't be used.
Why so?
I don't mean to be patronizing and say the user doesn't know what she wants. Depending on where the user is on the decision-maker vs decision-implementer axis, she might have useful hints.
But a pattern I've repeatedly seen in conversations between head bureaucrats and technocrats (you) is that solutioning takes the liberty to drift into the realm of fantasy or outsized ambition. Bureaucrats will often pitch riskier, loftier or more ambitious problem statements to people who have leverage (external capital, tech skills, etc.). It makes sense: if you think you have the internal machinery to pursue your day-to-day problems, you can leverage outsiders for the fancier or more ambitious bets. After all, the external technocrat is unencumbered by the administrative complexities and constraints of the everyday. You'll be tasked with forging silver bullets that kill mythical werewolves, while the kingdom faces an ongoing, murky rat infestation.
But what should you be building? Is there even a werewolf problem? Should you not be improving the rat traps?
You need to be careful, after all. Proper project scoping is everything. If you don't get it right, why bother with the rest? It is already an arduous, long journey to get 'anything' implemented within sarkari constraints; at the very least you should be convinced about the end goal. Imagine if Sisyphus, amid his existing troubles, were later told that the hill itself was the wrong one.
So what do you do to evaluate whether a solution being pitched is solving a genuine need? Will it be used by stakeholders once made? Is there an organic demand?
Here, Paul McCartney, noted civic technologist, has a thing to add:
Hey Jude
Don't make it bad
Take a sad song and make it better
Remember to let her under your skin
Then you begin to make it better
Supplementary reading, I recommend opening in a new tab while you read the rest:
I think what he wanted to say was that if you want your tech intervention to be used, identify an existing implemented process, understand it deeply and then improve it. And yes, don't leave it worse than you found it.
Simple and profound.
Don't try to introduce new behaviors, just take an existing behavior with proven intent and make it better.
What does it mean?
So for example, if a bureaucrat wants you to build a fancy state-of-the-art AI model to identify fraud, first see if they are already implementing rudimentary checks and taking action. Are they closing the loop? Suspending people? If not, then AI isn't going to change anything. The bottleneck on action isn't the efficiency or scale that algorithms bring.
Or, say you are writing an algorithm that equitably distributes funding across women community groups. Is there an existing defined process where they are deciding this in a rudimentary manner?
Or, you are asked to create a solution that identifies 'activity' in classrooms using AI on CCTV footage. But are they currently acting on audit reports on teacher quality, on citizen complaints, or on manual audits of sampled footage, and taking corrective action?
Or, you are asked to use drones to identify potholes on roads, but how robust is the existing pipeline for reporting, tracking and closing defects identified in regular manual inspections?
So if the machinery isn't already acting on something in a rudimentary or manual way, there is little evidence it will act on it later when the thing is scaled up and made fancier.
A key part of scoping a project is mapping what actions will be taken based on the output of your intervention. But action depends on many factors: political economy, existing capacities, IT infrastructure, etc. Many of these are not in your control, nor do they fit your timeline of change. So it is easier to take processes where the output-action pipeline is already in place and focus on just improving the efficiency and quality of the output.
I am not being overly unimaginative. Having seen dozens of technocratic projects, both successful and failed, I try to have measured ambition. This is not to say technologists shouldn't pitch projects that could change behaviors. My prized projects are those. That should be the goal, and exponential gains often come from such projects. But for this, you need patience (multi-year, even decadal). You need to be clear that this is what you are getting into. There will be moments when the circumstances align and you'll be able to pitch a completely new behavior-inducing tech solution, and even succeed. But the stars need to align, and stars there are many. In the meantime, there are rat traps that can be improved.
So in summary:
Look for existing actions and not intent (relationship advice as well)
It is not what they say they'd do if they had your silver bullet; it is what they are already doing with rat traps, and whether you can improve that.
So what do you say when someone asks you to chase the next technocratic/goal-post shifting/silver bullet?
Na-Na-Na Na-Na-Na-Na Na-Na-Na Hey Jude.
PS. More than 50% of the song is the Beatles going 'Na Na Na Na', and I think that makes 'Hey Jude' the perfect tech-scoping song. A good technocrat should be saying no more often than yes if they are doing their job well.
So true. People talk about tech, especially AI now, as if it could fill all the gaps in the process and solve all the issues on its own. And it's not just in the government sector but everywhere! People with decision-making authority get their pet projects built or products bought depending on what seems like a magical solution at the time. After spending a huge amount of time and effort, they realize that there is no adoption because the problem or bottleneck wasn't even tech to begin with.
A great read Harsh. Can relate to it a lot. But then lucky are the ones who can determine which projects to work on and which to say no to.
If you are standing between the bureaucrat and the "silver bullet", or even the donor and their "impact", and keep saying "na, na, na", you will most likely get replaced by the "ya, ya, ya" guys, albeit with an unhappy ending. Better rat traps are not shiny enough for the donor or the bureaucrat in most cases.
The pull of the silver bullet, in systems that have a high number of constraints and competing needs, is difficult not to give in to.