AI, ethics and the end of the world as we know it – or business as usual?

It somehow feels different this time, doesn’t it? AR. Blockchain. IoT. Cloud. Smart Cities. We’ve all lived through hype cycles before. So many bold predictions. So many disappointed – dare I say failed – expectations. But it somehow feels different this time…

AI, of course, is not new. Siri, Alexa, and the ubiquitous chatbot are just a few examples of the way traditional AI – approaches based on rules and patterns – has increasingly pervaded our day-to-day lives in ways that have undoubtedly made them easier. Just try to imagine, for example, still needing to fumble around with a map whilst driving.

But Generative AI – and the neural network architecture that ChatGPT, Bard and others are built upon – is new. New precisely because Generative AI doesn’t simply reuse existing data; it actually learns, at a rate and pace heretofore unimaginable, from the vast amounts of data it is trained on and then, as the name implies, generates new content.

And it’s here that arguably some of the thorniest ethical questions and challenges of our time come into play. If a machine can learn and create faster than a human being, how do we maintain control over a creation that knows more than us, that is ‘smarter’ than us?

How do we ensure that, in the process of pursuing noble causes like curing cancer, we don’t inadvertently create a Frankensteinian monster that, either on its own or in the wrong hands, is used for nefarious ends like impersonating world leaders, unleashing chemical weapons, or starting a global war? ChatGPT, after all, was only launched as a prototype at the end of November 2022, and we’re already hearing about ‘black boxes’ and ‘hallucinations’, wherein technology that even its creators no longer fully understand produces outputs that are flat-out wrong.

Even if, for the sake of argument, we take a step back from the doomsday scenario, very practical and pressing ethical issues remain. If machine learning is based on data inputs, how do we ensure that this data is factual and unbiased, even when the technology is understood, remains under human control and is intended for good use?

How, in other words, do we prevent the old adage ‘garbage in, garbage out’ from resulting not just in false narratives but in the falsification of history itself? Today, the prevailing input narrative may favour your politics or views. But what prevents it from favouring views you disagree with or believe are wrong? Or, more aptly put, what do we need to do to preserve a constructive Hegelian dialectic in our new world of deep machine learning?

These questions aren’t easy. And I don’t presume to have easy answers. But I do have to confess that, after years of fretting about the dangers of government regulation stifling technological innovation, I’m reassessing that position.

This is not to say that I think regulation is the answer. It is rather to suggest that, when it comes to Generative AI, the stakes – the risk-reward ratio – are simply too high to fall back on old laissez-faire assumptions.

So if regulation, in one form or another, is needed, the questions then become: to what extent, and at what cost? Not surprisingly, the world seems to be falling squarely into three blocs.

On the one hand, we have the US and UK, both only just now opening AI hearings and regulatory reviews. The primary focus for both countries is on advancing economic growth and innovation. The central concern, of course, is that global adversaries will add AI to their geopolitical toolkits.

Speaking of global adversaries: on the other hand, we have China, which launched its Next Generation AI Development Plan as far back as 2017. With six years of regulatory experience under its belt, China’s predominant focus is now on mandating security requirements for products using Generative AI whilst at the same time ensuring transparency rights for end users.

Few believe that China will apply the same guardrails to government use of AI as it does to business – a prospect which is raising concern throughout the West, as is the likelihood of China exporting its regulatory approach to the Global South and its Belt and Road Initiative partners.

Falling squarely in the middle between these two tech superblocs, we have the EU, which recently announced draft regulation for the 27-member bloc. This first-of-its-kind regulatory framework aims to introduce a risk-based approach to AI that balances ‘the safety and fundamental rights of people and businesses’ against a drive to ‘strengthen uptake, investment and innovation in AI.’

Whilst already criticised in some circles as too heavy-handed and off-putting to developers and start-ups, the EU’s four-tiered approach – which ranges from an outright prohibition on unacceptable-risk usages, such as social scoring by governments, through to the free use of minimal-risk applications, such as spam filters – seems to offer a practical and potentially positive European third way which may yet set the global standard for ethical AI.
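For readers who think in code, the tiered logic is easy to sketch. The following Python is purely illustrative: the tier names follow the draft framework’s risk categories, but the example usages and treatments are my own paraphrase of the proposal, not the legal text.

```python
from enum import Enum

# Illustrative sketch of the EU's draft four-tier, risk-based approach.
# Tier names mirror the proposal's risk categories; the example usages and
# treatments are a simplified paraphrase for illustration, not the legal text.
class AIRiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring by governments)"
    HIGH = "permitted subject to strict conformity obligations (e.g. CV-screening tools)"
    LIMITED = "permitted with transparency duties (e.g. chatbots must disclose they are AI)"
    MINIMAL = "free use (e.g. spam filters)"

def treatment(tier: AIRiskTier) -> str:
    """Return the (simplified) regulatory treatment for a given risk tier."""
    return tier.value

for tier in AIRiskTier:
    print(f"{tier.name}: {treatment(tier)}")
```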

I, for one, am rooting for the EU.
