Our positions on generative AI

As with many technology trends before it, we’re keeping an eye on artificial intelligence (AI).

AI itself is a broad concept, but generative AI has come to the fore as a general-purpose technology thanks to recent developments in cloud-based computation and machine learning. Technology is also more widespread and available to more people than ever, so more people are talking about generative AI than ever talked about something even more ubiquitous, like HTML.

Given the hype, it feels worthwhile stating our positions on generative AI – or, as we like to call it, ‘applied statistics’. We’re open to working on and with it, but there are a few ideas we’ll bring to the table.

The positions

  1. Utility trumps hyperbole
  2. Augmented not artificial intelligence
  3. Local and open first
  4. There will be consequences
  5. Outcomes over outputs

Utility trumps hyperbole

The fundamental principle of Boring Magic’s work is that people want technologies to work. People prefer things to be functional first; the specific technologies only matter when they reduce or undermine that utility.

There are outsized, unfounded claims being made about the utility of AI. It is not ‘more profound than fire’. The macroeconomic implications of AI are often overstated too, though it will still likely have an impact on productivity.

We think it’s sensible to look at how generative AI can be useful or make things less tedious, so we’re exploring the possibilities: from making analysis more accessible through to automating repeatable tasks.

We won’t sell you a bunch of hype, just deliver stuff that works.

Augmented not artificial intelligence

Technologies have an impact on the availability of jobs. The introduction of the digital spreadsheet meant that chartered accountants could crunch the numbers easily themselves, leaving accounting clerks surplus to requirements.

Jevons paradox teaches us that AI will lead to more work, not less: when technology makes something more efficient, demand for it tends to grow. Over time accountants needed fewer clerks, but increases in financial activity have led to a greater need for auditors. So we will still need people in jobs to do the thinking, reasoning, assessing and other things people are good at.

Rather than replacing people with machines to reduce costs, technology should be used to empower human workers. We should augment the intelligence of our people, not replace it.

That means using things like large language models (LLMs) to reduce the inertia of the blank-page problem – helping you brainstorm – rather than asking an LLM to write something for you. Extensive, not intensive, technology.

Local and open first

Right now, we’re in a hype cycle, with lots of enthusiasm, funding and support for generative AI. But the boom of a hype cycle is always followed by a bust, and the field has already weathered several ‘AI winters’ over the decades.

If you add AI to your product or service and rely on a cloud-based supplier for that capability, you could find the supplier goes into administration – or worse, enshittification: fees go up while the quality of service plunges. And free services get monetised eventually.

But there are lots of openly available generative text and vision models you can run on your own computer – your ‘local machine’ – breaking the reliance on external suppliers.
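To make that concrete, here’s a minimal sketch in Python using the Hugging Face transformers library. The model named is just one illustrative example of a small, openly licensed model that fits on a laptop – it’s a sketch of the approach, not a recommendation.

```python
# A minimal sketch of 'local and open first': generating text with an
# openly licensed model on your own machine. The model name is one
# illustrative example; pick whatever open model fits your hardware.
from transformers import pipeline

# The model downloads once, then everything runs locally:
# no per-token fees, and no data leaving the building.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

prompt = "Suggest three plain-English names for an internal search tool:"
result = generator(prompt, max_new_tokens=80, do_sample=True)
print(result[0]["generated_text"])
```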

When exploring how to apply generative AI to a client’s problem, we’ll always use an open model and run it locally first. It’s cheaper than using a third party, and it’s more sustainable too.

It also mitigates some privacy and security risks by keeping all data processing local, rather than on a machine in someone else’s data centre. That means we can get started sooner and do a data protection impact assessment later, where necessary.

We can use the big players like OpenAI and Anthropic if we need to, but let’s go local and open first.

There will be consequences

People like to think of technology as a box that does a specific thing, but technology impacts, and is impacted by, everything around it: it exists within an ecology. That’s inescapable, so we should try to sense the likely and unlikely consequences of implementing generative AI – on people, animals, the environment, organisations, policy, society and economies.

That sounds like a big project, but there are plenty of tools out there to make it easier. In the past we’ve used consequence scanning, effects mapping, financial forecasting, Four Futures and other extrapolation methods to explore risks and harms.

As responsible people, it’s our duty to bring otherwise-unforeseen consequences into view, so that we can work out how to mitigate the risks – or stop altogether.

Outcomes over outputs

It feels like everyone’s doing something with generative AI at the moment, and if you’re not, it’s easy to feel left out. But that doesn’t mean you have to do something: FOMO is not a strategy.

We’ll take a look at where generative AI might be useful, but we’ll also recommend other technologies where they’re cheaper, faster or more sustainable. That might mean implementing search and filtering instead of a chatbot, especially as that’s an interface more people are already used to.
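As a rough illustration, here’s a toy sketch of that ‘boring’ alternative in Python – plain keyword search with a category filter. All of the names and data are invented for the example.

```python
# A toy sketch of search-and-filter instead of a chatbot.
# All names and data here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    body: str
    category: str

DOCS = [
    Doc("Annual leave policy", "How to book time off", "HR"),
    Doc("Expenses guide", "How to claim travel costs", "Finance"),
    Doc("Laptop setup", "Getting a new machine ready", "IT"),
]

def search(query, category=None):
    """Case-insensitive keyword match, optionally filtered by category."""
    terms = query.lower().split()
    hits = [
        d for d in DOCS
        if all(t in (d.title + " " + d.body).lower() for t in terms)
    ]
    if category:
        hits = [d for d in hits if d.category == category]
    return hits

print([d.title for d in search("leave", category="HR")])  # ['Annual leave policy']
```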

It’s more important to get the job done and achieve outcomes than to do the latest thing because it’s cool.

Let’s be pragmatic

Ultimately, our approach to generative AI is the same as our approach to any other technology: grounded in practicality, mindful of being responsible and ethical, and focused on meaningful outcomes. That’s the best way to harness its potential.

Beware the AI snake oil.

· artificial intelligence, generative ai, utility, outcomes, responsible tech, ethics, open source