Using LLMs ethically

The bigger picture

A small digression, before we dig in.

As far as possible, this blog isn't overly political, normally focusing on technical topics and implementations. But caught up in the zeitgeist, the subject of LLM usage is inescapable.

Most people seem to care about how to use them effectively: maximising productivity, automating away labour, and so on. The angle of this blog is different: I want to talk about ethics.

But before we get there, we need to zoom out a little.

Can we live life virtuously?

Sure, if you can afford it.

I can choose vegetarianism on moral grounds only because I am in an extremely fortunate position.

A child born into extreme poverty in South Sudan does not have the luxury of philosophising about the suffering of livestock. Survival comes first.

Ethics is often downstream of privilege. This raises a difficult question: should we hold everyone globally to the same ethical standards? If ethics requires surplus time, energy, money, and education, then it is unevenly distributed by default.

We should remember that our moral judgments must always be tempered with humility.

Which hills are we willing to die on?

We all make ethically dubious choices constantly.

How can we live with ourselves?

Because we are deeply human and life is inherently tragic.

No one can optimise for every moral dimension simultaneously. Philosophers from Aristotle to Herbert Simon have argued that the reasonable standard is not perfection, but deliberate satisficing: making the best available choice given real constraints, and owning it.

What can we realistically strive for?

So what remains?

We just do our best given the circumstances. Nobody is a saint.

This framing matters when we start talking about LLMs.

Creating an ethical usage framework for LLMs

Getting crushed in the wheels of the hype train

There is enormous pressure right now.

If you don’t use LLMs, the story goes, you’re falling behind.

This narrative is powerful, especially in tech.

And to be fair, there’s something to it. Defenders of aggressive LLM adoption have a reasonable case: these tools can meaningfully reduce time spent on low-value repetitive tasks, free up human attention for higher-order work, and (in resource-constrained organisations) stretch limited capacity further.

The ethical question isn’t whether benefits can be found, but how these tools are used: (i) deliberately, (ii) compulsively, or (iii) through acquiescence.

I am advocating for more thoughtful and reflective consideration of our actions (a tough sell, considering this isn’t the default state of humanity).

Energy usage: a red herring?

Energy consumption is frequently cited as the core ethical issue.

There is something to this, as data centres consume significant power. Companies like OpenAI, Google, and Microsoft operate vast compute infrastructure to train and serve models.

But we need proportionality.

Unless your lifestyle is already unusually low-impact, your LLM usage is probably not your largest environmental footprint.

This doesn’t make energy usage irrelevant, but there are deeper concerns, listed in the framework discussion linked above.

Genuine concerns

I won’t dive too deeply into this, as the linked framework doc covers it in more detail.

The digital divide is perhaps the most structurally troubling issue. Access to these tools is strongly unequal. Wealthy individuals and organisations in high-income countries gain productivity advantages that compound over time, while communities without reliable internet, hardware, or the educational background to use these tools effectively fall further behind.

This isn’t a new dynamic; it mirrors historical patterns of technological colonialism. But LLMs accelerate it. For those of us in privileged positions, benefiting from these tools without any consideration of that asymmetry is a form of indifference worth highlighting.

Other major concerns are explored in depth in the linked discussion.

The least bad option

The key point: choose the least bad option you can justify, and be honest about why.
