TAPAS.network | 24 April 2023 | Editorial Opinion | Peter Stonham

Are we smart enough to deal with the implications of AI?

Peter Stonham

STONE AGE MAN, if handed a smart phone, might be bemused, intrigued – and probably concerned – but it is unlikely he would immediately say how useful it was, and how it was going to change his life. The functionality of the device would hardly match the priorities of his era – after all, it cannot hunt, cut trees down or light a fire.

Fast forward to today, and there seems to be an assumption that technology – in particular ‘intelligent technology’ – can and should take over all manner of tasks that we believe to be essential to modern life, but that take up time and effort that as individuals we could better deploy elsewhere, and that organisations want robots to perform much more cheaply.

Since the Industrial Revolution, things that were once accepted as ‘manual work’ have progressively been handed over to machines doing the job for us. With the advent of the digital world we have since entered, the question now is how far that process of replacement should reasonably and sensibly go, when some of the functions being passed into new non-human hands substitute not just for physical activities, but for thinking processes and decision-making too.

Conscious of this significant challenge, the Government has just published a White Paper outlining its Artificial Intelligence (AI) strategy.

Understanding the relationship of rapidly evolving new technology with long-standing human activity is probably at the core of the debate that should now be happening about the role of AI. In its latest incarnation, advanced computing, programming and processing power offers the prospect not only of taking over tasks with which we are familiar, but of deciding what things are important and how they might best be organised and undertaken – and, by doing so, complementing, challenging or even displacing the structures of humans’ individual and collective thinking and activity established over thousands of years.

The fire-power of this new ability is awesome. Moore’s Law told us that computing power, as measured by hardware and processing capacity, doubles roughly every two years. But there is no equivalent law telling us how we will respond to that capability – how we will design systems and applications to take over tasks and functions that were previously manual, or what new things we will continually invent for our ‘Artificial Intelligence’ machines to do, including learning, in moments, what it has taken humans millennia to understand and master.

Considering the points outlined above must surely be seen as a critical part of the conversation that seems most necessary about where AI can, and maybe will – whether we like it or not – take over functions that to date have required human execution, or at least supervision. For transport, these include providing the system itself, advising or instructing humans on how they should make their choices, and even taking away the need for people to travel personally or drive vehicles to achieve their objectives.

The new AI-powered machines that are in prospect are not limited to undertaking complex and sensitive tasks, but are able to offer to look after things that humans have traditionally expected to do for themselves – from cleaning, cooking and driving to problem-solving and decision-making.

As well as changing the whole landscape of society, this shift will see a great number of such tasks, previously occupying huge numbers of humans, effectively handed over to super-intelligent machines, taking away both sources of income and life-defining jobs and activities.

So far, the conversation about AI has mainly seemed to focus on the direct substitution of machine activities for equivalent human ones, but it is unlikely that the unleashed power of intelligent man-made life will stop at simply acting as a functional man-like robot with a swivelling head and mechanical limbs.

There is, as yet, little apparent discussion of the downstream role of artificial intelligence beyond playing Man Friday or Jeeves and answering its master’s every need. At a societal level, this significant resource will be at the disposal of governments, corporations and military forces – and errant individuals – whose decisions on how to deploy it are already raising important questions, yet without a context for a wider community discussion about what is acceptable and how the risks of things getting out of control should be managed.

Are tech companies – and individual inventors – moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?

Last month hundreds of scientists and entrepreneurs working in this field signed a warning letter asking for a pause in further development and application whilst the required conversation took place. The concern was led by a group of prominent computer scientists and other tech industry notables such as Elon Musk and Apple co-founder Steve Wozniak, who are calling for a six-month pause to consider the risks.

Their petition is a response to San Francisco startup OpenAI’s recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT, which helped spark a race between tech giants Microsoft and Google to unveil similar applications. As well as the open letter, the authors have posted a set of policy recommendations for what should happen during the pause they are seeking. Both can be found here.

Nearly a decade ago, similar concerns were also raised in an Open Letter created in January 2015 by Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts, addressing the need for more in-depth research on the societal impacts of AI. They too recognised that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent potential pitfalls. Whilst artificial intelligence had the potential to eradicate disease and poverty, there was a parallel concern about the creation of something unsafe or uncontrollable. The letter was titled “Research Priorities for Robust and Beneficial Artificial Intelligence”, with detailed research priorities set out in an accompanying twelve-page document.

It does not seem that either those 2015 concerns or the new ones just expressed have led to much change in the trajectory of the AI field’s development, or in the governmental or societal frameworks for it.

Concerns heard so far about the spread of this new world of AI-driven daily individual consumer activity have mostly focussed on the potential marginalisation of digitally dispossessed people, or those whose behaviours don’t fit into prescribed algorithms and become regarded as awkward outliers that the system needs to knock back into shape. But the impacts could soon be far wider.

In the transport field, there are huge areas of current activity that are ripe for being reshaped by this multiplying level of AI capability.

A few obvious examples include: automated driving and vehicle control for road, rail and air; system-controlled warehousing and logistics delivery and distribution; management of personal travel planning and optimal decision-making, and maybe behaviour monitoring; recommending individual travel patterns against economic and sustainability criteria; managing our digital transactions and personal expenditure accounts for travel; and (some may fear) specifying where and how people are authorised to move about – and the criteria that are used to say yes or no to what individuals would like to do.

As a society we won’t necessarily be able to pick and choose between the good things that carefully applied AI can help us with and the bad things that questionable people with access to AI get up to – or the things the machines begin to learn they can do themselves, or were never properly designed to do only under careful instruction. We are used to the implications of ‘rogue’ human behaviour, but seem as yet to be in denial that there could ever be rogue machine behaviour, or ill-intentioned instruction of the machines by devious forces about what they should be allowed to do.

If we are to avoid a future we do not want, the time to specify the necessary framework is now – before the genie is out of the bottle and we find ourselves desperately trying to put it back in, or our dependency on these new systems has become irreversible.

Even now, in embracing the delights of conversations with ChatGPT and the like, we are handing over many insights about ourselves by the very act of putting our questions to these third parties. And if we want these clever robots to do some work on our systems development and coding, we first have to allow them to look at the IP that we have so far created, perhaps hoping they then forget about it rather than build it into their own (and their owners’) next outputs.

Let’s not overlook the fact that these new fast-learning machines have their own creators and masters, and are not by definition munificent, benign beings. And that’s before they truly develop minds of their own…

Peter Stonham is the Editorial Director of TAPAS Network

This article was first published in LTT magazine, LTT867, 24 April 2023.
