Trust, Technology and the CTO

Rhea Karuturi
9 min read · Oct 16, 2020
tl;dr: I’m the CTO

Recently my friend asked me what my job is — beyond the title, what is it that I actually do. I tried to explain it the best way I could: it’s all the tech stuff. I was met with silence, so I tried to expand — the day to day is different, but I’m basically engaged in building tools and systems that help everybody do their job better, that create more transparency in our operations, that create checks and balances. It didn’t feel satisfactory — because yes, that explains parts of what I do, and it has some interesting buzzwords, but it didn’t really capture what felt like the important parts. I vaguely described it as similar, in a sense, to my thesis topic. It’s only after saying that to another person that I really gave it more time and thought. This essay is what I came up with.

My honours thesis was in the Science, Technology and Society department, so I got to pick a topic that sat at the intersection of those mammoth fields. I chose to work on an idea I came up with: Trust Infrastructures (specifically the Aadhaar, but only because I was ̶f̶o̶r̶c̶e̶d̶ guided to focus on one example).

What I meant by that term was that we have all these systems and processes and documents and tech that help us manage trust — that help us verify, control and cooperate with each other. And they can reveal very interesting beliefs that we hold about science, technology and society. I wanted to explore those beliefs, especially in the atmosphere I graduated in which was obsessed with the “trustless” future that cryptocurrency seemed to herald.

At the center of my thesis was the question of how to define trust itself — and of all the definitions I came across, the one that I found most often and the one that resonated most with me is that trust is made up of two things: goodwill and competence.

I always thought goodwill was the most important part of that, but I’ve come to realise that competence is just as important. Not competence as just the actual skills to do a job, but also the motivation to do it well and the ability to do it consistently. Competence extended to cover the ability to check the work, and to have all the information required to do the task in the first place.

My final presentation, which happened in a very dark room.

In my thesis presentation, I defined infrastructure academically and then broke it down to its fundamentals: an infrastructure is basically the context within which an action happens. And so when it comes to managing trust through an infrastructure, what I mean is: what is the context for the various situations we are put in when people have to work together? Are they situations that require a lot of trust, or situations that require a lot of control?

My dorm wall where I sketched out the basic structure of my thesis

Every infrastructure, I think, can be broken down into various systems, protocols and networks. But to really grasp what it means to those participating in it, it’s more useful to look at the situations of trust within it. For Aadhaar, the primary situation I considered was getting your grain from the Kirana shop through the PDS system. Another was a domestic worker proving her identity for employment (from an ad for Aadhaar), and so on.

I got the idea of writing my thesis on Trust Infrastructures during the quarter I took a leave of absence from college. I was 22 and working in our family business, and that week I was stationed in Gambella, Ethiopia. It was while I was standing at a table full of binders, scanning each document with the CamScanner app on my phone, that I first thought of it.

Breakfast in Gambella

What I was scanning were rental agreements for our machinery — renting it out was an idea Yeshoda had to monetise the machinery during the time we didn’t need it. The agreements themselves were standard, but we needed to track down some of the machinery, and digitising the records was a good safeguard.

It was not the most mentally stimulating work.

As I was scanning page after page, I was thinking about what I was doing and (as I often did that year) what qualified me to be doing it. At first glance, I think I’m allowed to say I was overqualified to be scanning documents. But more importantly, my role wasn’t really about scanning rental agreements. It was about being part of a system put in place to run the rental business — for which I was vastly underqualified. It was a feeling I often got in our family business — I was constantly grappling with privilege and what it meant to be doing what I was doing.

But I realised that it wasn’t a skill I was being put in this position for — it was trustworthiness. My role in the many areas I was interning in was more about oversight, verification and accountability than any hard skill. I wasn’t an expert in agriculture, but I could be relied on to go to the farm, inspect everything closely, and faithfully report back on what was happening. I wasn’t an expert on rental agreements, but I was unlikely to cut a side deal for the machinery and claim it was on site when it was really being used by someone else. In other words, I had goodwill in abundance and just enough competence to be satisfactory.

That’s the family business secret — that’s why power was passed within a small group. It was the hypothesis (right or wrong) that competence can be learnt, but goodwill — and consequently, trustworthiness — is in limited supply. Francis Fukuyama, in arguably the most popular academic work on trust (Trust: The Social Virtues and the Creation of Prosperity, 1995), wrote precisely about that: how societies with low generalised trust (India was one of his examples) tend to grow through businesses structured like the family business — and how this is the reason they have trouble growing beyond a certain limit.

The validity of his argument in today’s world aside, this is a sentiment widely echoed in the academic literature: the systems we create in the absence of trust are expensive, onerous and complex. That covers anything from accounting systems to ERP systems to review meetings — all centred around managing situations that require cooperation, and therefore involve transactions in trust.

To clarify, not all of these situations are high trust. In fact, the most important distinction I made was labelling situations as high trust or its opposite — high control. But I still called each one a situation of trust, because trust is the scale it is being measured on — and the default we build systems to leverage or replace.

Some fun pictures of my thesis wall, which became the focal point of many photos

And what I do today is continue that work — not the scanning, but the thinking about trust. When I build anything, what I’m really doing is creating systems that manage trust. Some processes leverage trust: where there is reliable goodwill, I make sure there is a system of support to ensure high competence. Other systems I try to make trustless — in the narrow sense that they are transparent and verifiable, and don’t rely on the goodwill or competence of the participants using them.

Sometimes the work is less technical and more human, and then I defer to Yeshoda — that is, where there is high competence, ensuring that abundant goodwill is created. At the end of the day, trust is a relational value — it cannot exist in a vacuum, without participants. The amount of trust, its directionality — all of it matters, and all of it is part of a complex social fabric that is the subject of a lot of study (believe me, I had to read a lot of it for my literature review).

Which is why building tech informed by a deep appreciation of what trust means, and how valuable it is, has been so important to me. The core belief that motivated the idea of formalising what a Trust Infrastructure meant was that trust is incredibly valuable. Believing in another person, entrusting them with the power to affect our interests, is a way of accepting vulnerability and of extending an invitation to kinship.

It is only by trusting that we can be repaid in trustworthiness — and it is only by finding others to be trustworthy that we can imagine something that takes an army to build and sustain. If we think of ourselves as a unit of one and find it hard to rely on others, then what we can do, and how effectively we can do it, either shrinks dramatically or becomes a function of how much control we can exert over others — both of which are fairly depressing scenarios.

Beyond productivity, there is something in accepting vulnerability and extending trust that is deeply human. Being able to trust and being worthy of trust both serve important functions. Hannah Arendt and Immanuel Kant both talk about this: that making a promise about the actions of your future self elevates a person, connects them to a higher level of their humanity. Keeping those promises is what being trustworthy is all about. And experiencing your trust being rewarded is what convinces us of a shared world, where our wellbeing is connected to and cared for by others.

But there’s something I learnt working on these systems in the real world that I didn’t understand when writing my thesis. Back then, I found those arguing for “trustless” systems — be it blockchain, panopticon structures or large-scale surveillance systems — deeply flawed but also cynical. Flawed because these systems are rarely truly trustless, and cynical because they rest on the assumption that individuals, societies or institutions are not trustworthy. The first part I still stand by, as do most experts, but it’s the second that I’ve come to question.

If I were to rewrite the thesis now, I’d add another possible motivation for designing trustless systems: that sometimes you create a trustless system because you do believe in the trustworthiness of people. Because I’ve found that trustworthy people and teams are so valuable that often we create trustless systems to manage their lower-level work, so we can free up more of their time.

I’m a strong believer in the idea that amazing things happen when intelligent people are bored. And that’s probably a more concise way of describing what I do — I find ways to make people bored. By automating tasks or creating transparency (or at the very least ease of access), I’m trying to give the people in our team more free time where they can be bored and ask themselves what’s next. If all your routine tasks can be completed in 1–2 hours, what do you do with the rest of your time? If you trust your team, you believe that they’re going to ask — what more can we do, what else can we do, how can we do what we already do differently? Our best ideas have come this way — all our innovations were incremental, the result of seeing one problem solved and thinking, okay what’s next?

I really think that, as an entrepreneur, the most humbling part of the job is that your team is trusting you with their time and energy. As people, that is all we really have at the end of the day — our time, our energy, and the decision of what we care about enough to direct them towards. Your team is trusting you with which direction to apply themselves in, and managing that time to get the most out of it — for them, for the company, and beyond — is a huge responsibility. In a way, managing their time is an extension of the same infrastructure of trust — because it’s only when people believe in your vision, and buy into the things you build and the processes you set up, that they will participate in and grow them.

Which is a long-winded answer to the simple question of what I do, but an important one to arrive at — and much more satisfactory than saying I handle all things tech (which doesn’t include fixing the printer).

Just call me Rhea “I am not IT Support” Karuturi
