TAPAS.network | 24 April 2023 | Editorial Opinion | Peter Stonham

Are we smart enough to deal with the implications of AI?

Peter Stonham

STONE AGE MAN, if handed a smart phone, might be bemused, intrigued – and probably concerned – but it is unlikely he would immediately say how useful it was, and how it was going to change his life. The functionality of the device would hardly match the priorities of his era – after all, it cannot hunt, cut trees down or light a fire.

Fast forward to today, and there seems to be an assumption that technology – in particular ‘intelligent technology’ – can and should take over all manner of tasks that we believe to be essential to modern life, but that take up time and effort that as individuals we could better deploy elsewhere, and that organisations want robots to perform much more cheaply.

Since the Industrial Revolution, things that were once accepted as ‘manual work’ have progressively been replaced by machines doing the job for us. Now that we have entered the digital world, the question is how far that process of replacement should reasonably and sensibly go, when some of the functions being passed into new non-human hands substitute not just for physical activities, but for thinking processes and decision-making too.

Conscious of this significant challenge, the Government has just published a White Paper outlining its Artificial Intelligence (AI) strategy.

Understanding the relationship of rapidly evolving new technology with long-standing human activity is probably at the core of the debate that should now be happening about the role of AI. In its latest incarnation, advanced computing, programming and processing power offers the prospect not only of taking over tasks with which we are familiar, but of deciding what things are important and how they might best be organised and undertaken – and, by doing so, complementing, challenging or even displacing the structures of humans’ individual and collective thinking and activity established over thousands of years.

The fire-power of this new ability is awesome. Moore’s Law observed that the number of transistors on a chip – and with it, broadly, computing power – doubles roughly every two years. But there is no comparable law for how we will respond in designing the systems and applications that take over tasks and functions that were previously manual – coupled with the continuing invention of new things we can put our ‘Artificial Intelligence’ machines to work doing, including learning, in moments, what it has taken humans millennia to understand and master.

Considering the points outlined above must surely be a critical part of the conversation that is now needed about where AI can, and maybe will – whether we like it or not – take over functions that to date have required human execution, or at least supervision. For transport, these include running the system itself, advising or instructing humans on how they should make their choices, and even taking away the need for people to travel personally or drive vehicles to achieve their objectives.

The new AI-powered machines in prospect are not limited to undertaking complex and sensitive tasks; they can also offer to look after things that humans have traditionally expected to do for themselves – from cleaning, cooking and driving to problem-solving and decision-making.

As well as changing the whole landscape of society, this shift will see a great number of such tasks, previously occupying huge numbers of humans, effectively handed over to super-intelligent machines, taking away both sources of income and life-defining jobs and activities.

So far, the conversation about AI has mainly seemed to focus on the direct substitution of machine activities for equivalent human ones, but it is unlikely that the unleashed power of intelligent man-made life will stop at simply acting as a functional man-like robot with a swivelling head and mechanical limbs.

There is, as yet, little apparent discussion of the downstream role of artificial intelligence beyond playing Man Friday or Jeeves, answering their master’s every need. At a societal level, this significant resource will be at the disposal of governments, corporations and military forces – and errant individuals – whose decisions on how to deploy it are already raising important questions, though without a context for a wider community discussion about what is acceptable and how the risks of things getting out of control should be managed.

Are tech companies – and individual inventors – moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?

Last month hundreds of scientists and entrepreneurs working in this field signed a warning letter asking for a pause to further development and application whilst the required conversation took place. The concern was led by a group of prominent computer scientists and other tech industry notables such as Elon Musk and Apple co-founder Steve Wozniak, who are calling for a 6-month pause to consider the risks.

Their petition is a response to San Francisco startup OpenAI’s recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT, which helped spark a race between tech giants Microsoft and Google to unveil similar applications. As well as the open letter, the authors have posted a set of policy recommendations for what should happen during the pause they are seeking. Both can be found here.

Nearly a decade ago, similar concerns were raised in an Open Letter created in January 2015 by Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts, addressing the need for more in-depth research on the societal impacts of AI. They too recognised that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent potential pitfalls. Whilst artificial intelligence had the potential to eradicate disease and poverty, there was a parallel concern about the creation of something unsafe or uncontrollable. The letter was titled “Research Priorities for Robust and Beneficial Artificial Intelligence”, with detailed research priorities set out in an accompanying twelve-page document.

Neither those 2015 concerns nor the new ones just expressed seem to have led to much change in the trajectory of the AI field’s development, or in the governmental or societal frameworks for it.

Concerns heard so far at the spread of this new world of AI-driven daily consumer activity have mostly focussed on the potential marginalisation of digitally dispossessed people, or those whose behaviours don’t fit the prescribed algorithms and become regarded as awkward outliers that the system needs to knock back into shape. But the impacts could soon be far wider.

In the transport field, there are huge areas of current activity that are ripe for being reshaped by this multiplying level of AI capability.

A few obvious examples include: automated driving and vehicle control for road, rail and air; system-controlled warehousing, logistics delivery and distribution; management of personal travel planning and optimal decision-making, and maybe behaviour monitoring; recommending individual travel patterns against economic and sustainability criteria; managing our digital transactions and personal expenditure accounts for travel; and (some may fear) specifying where and how people are authorised to move about – and the criteria used to say yes or no to what individuals would like to do.

As a society we won’t necessarily be able to pick and choose between the ‘good things’ that carefully applied AI can help us with and the bad things: what questionable people with access to AI get up to, what the machines begin to learn they can do for themselves, or what happens when they have not been properly designed to be deployed only under careful instruction. We are used to the implications of ‘rogue’ human behaviour, but seem as yet to be in denial that there could ever be rogue machine behaviour, or ill-intentioned instruction of the machines by devious forces.

If we are to avoid a future we do not want, the time to specify the necessary framework is now – before the genie is out of the bottle and we are left desperately trying to put it back in, or our dependency on these new systems has become irreversible.

Even now, in embracing the delights of conversations with ChatGPT and the like, we are handing over many insights about ourselves by the very act of asking these third parties our questions. And if we want these clever robots to do some work on our systems development and coding, we first have to let them look at the IP we have so far created – perhaps hoping they then forget about it, rather than build it into their own (and their owners’) next outputs.

Let’s not overlook the fact that these new fast-learning machines have their own creators and masters, and are not by definition munificent, benign beings. And that’s before they truly develop minds of their own…

Peter Stonham is the Editorial Director of TAPAS Network

This article was first published in LTT magazine, LTT867, 24 April 2023.
