TAPAS.network | 24 April 2023 | Editorial Opinion | Peter Stonham

Are we smart enough to deal with the implications of AI?

Peter Stonham

STONE AGE MAN, if handed a smart phone, might be bemused, intrigued – and probably concerned – but it is unlikely he would immediately say how useful it was, and how it was going to change his life. The functionality of the device would hardly match the priorities of his era – after all, it cannot hunt, cut trees down or light a fire.

Fast forward to today, and there seems to be an assumption that technology – in particular ‘intelligent technology’ – can and should take over all manner of tasks that we believe to be essential to modern life, but that take up time and effort that as individuals we could better deploy elsewhere, and that organisations want robots to perform much more cheaply.

Since the Industrial Revolution, tasks once accepted as ‘manual work’ have progressively been taken over by machines doing the job for us. With the advent of the digital world we have since entered, the question now is how far that process of replacement should reasonably and sensibly go, when some of the functions being passed into new non-human hands substitute not just for physical activities, but for thinking processes and decision-making too.

Conscious of this significant challenge, the Government has just published a White Paper outlining its Artificial Intelligence (AI) strategy.

Understanding the relationship of rapidly evolving new technology with long-standing human activity is probably at the core of the debate that should now be happening about the role of AI. In its latest incarnation, advanced computing, programming and processing power offers the prospect not only of taking over tasks with which we are familiar, but of deciding what things are important and how they might best be organised and undertaken – and by doing so complementing, challenging or even displacing the structures of individual and collective human thinking and activity established over thousands of years.

The firepower of this new ability is awesome. Moore’s Law observed that the number of transistors on a chip – and with it processing power – doubles roughly every two years. But there is no comparable law describing how we will respond: how systems design and applications will be used to take over tasks and functions that were previously manual, coupled with the continuing invention of new things to put our ‘Artificial Intelligence’ machines to work on – including learning, in moments, what it has taken humans millennia to understand and master.
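To get a feel for what that doubling implies, a back-of-the-envelope illustration (my own rough arithmetic, not a figure from the White Paper or from the chip-makers): a capability that doubles every two years grows as

\[ C(t) = C_0 \cdot 2^{t/2}, \qquad C(10\ \text{years}) = 2^{5}\,C_0 \approx 32\,C_0, \qquad C(20\ \text{years}) = 2^{10}\,C_0 \approx 1000\,C_0 . \]

In other words, a roughly thousand-fold increase over twenty years – compounding at a pace quite unlike the rate at which laws, institutions and habits usually adapt.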

Considering the points outlined above must surely be a critical part of the conversation that now seems most necessary about where AI can, and maybe will – whether we like it or not – take over functions that to date have required human execution, or at least supervision. For transport, these include providing the system itself, advising or instructing humans on how they should make their choices, and even taking away the need for people to travel personally or drive vehicles to achieve their objectives.

The new AI-powered machines in prospect are not limited to undertaking complex and sensitive tasks; they can also offer to look after things that humans have traditionally expected to do for themselves – from cleaning, cooking and driving to problem-solving and decision-making.

As well as changing the whole landscape of society, this will see a great number of tasks that previously occupied huge numbers of humans effectively handed over to super-intelligent machines, taking away both sources of income and life-defining jobs and activities.

So far, the conversation about AI has mainly seemed to focus on the direct substitution of machine activities for equivalent human ones, but it is unlikely that the unleashed power of intelligent man-made life will stop at simply acting as a functional man-like robot with a swivelling head and mechanical limbs.

There is, as yet, little apparent discussion of the downstream role of artificial intelligence beyond playing Man Friday or Jeeves and answering its master’s every need. At a societal level, this significant resource will be at the disposal of governments, corporations and military forces – and errant individuals – whose decisions on how to deploy it are already raising important questions, though without a context for a wider community discussion about what is acceptable and how the risks of things getting out of control should be managed.

Are tech companies – and individual inventors – moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?

Last month, hundreds of scientists and entrepreneurs working in this field signed a warning letter asking for a pause in further development and application whilst the required conversation took place. The concern was led by a group of prominent computer scientists and other tech industry notables, such as Elon Musk and Apple co-founder Steve Wozniak, who are calling for a six-month pause to consider the risks.

Their petition is a response to San Francisco startup OpenAI’s recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT, which has helped spark a race between tech giants Microsoft and Google to unveil similar applications. As well as the open letter, the authors have posted a set of policy recommendations for what should happen during the pause they are seeking. Both can be found here.

Nearly a decade ago, similar concerns were raised in an open letter created in January 2015 by Stephen Hawking, Elon Musk and dozens of artificial intelligence experts, addressing the need for more in-depth research on the societal impacts of AI. They too recognised that society could reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent potential pitfalls. Whilst artificial intelligence had the potential to eradicate disease and poverty, there was a parallel concern about the creation of something unsafe or uncontrollable. The letter was titled “Research Priorities for Robust and Beneficial Artificial Intelligence”, with detailed research priorities set out in an accompanying twelve-page document.

It does not seem that either those 2015 concerns or the new ones just expressed have led to much change in the trajectory of the AI field’s development, or in the governmental or societal frameworks for it.

Concerns heard so far about the spread of this new world of AI-driven daily individual consumer activity have mostly focussed on the potential marginalisation of digitally dispossessed people, or those whose behaviours don’t fit into prescribed algorithms and come to be regarded as awkward outliers that the system needs to knock back into shape. But the impacts could soon be far wider.

In the transport field, there are huge areas of current activity that are ripe for being reshaped by this multiplying level of AI capability.

A few obvious examples include: automated driving and vehicle control for road, rail and air; system-controlled warehousing, logistics delivery and distribution; management of personal travel planning, optimal decision-making and perhaps behaviour monitoring; recommending individual travel patterns against economic and sustainability criteria; managing our digital transactions and personal expenditure accounts for travel; and (some may fear) specifying where and how people are authorised to move about – and the criteria used to say yes or no to what individuals would like to do.

As a society we won’t necessarily be able to pick and choose between the ‘good things’ that carefully applied AI can help us with and the bad things that questionable people with access to AI get up to – or the things the machines begin to learn they can do for themselves, or that they were not properly designed to do only under careful instruction. We are used to the implications of ‘rogue’ human behaviour, but seem as yet to be in denial that there could ever be rogue machine behaviour, or ill-intentioned instruction of the machines by devious forces about what they should be allowed to do.

If we are to avoid a future we do not want, the time to specify the necessary framework is now – before the genie is out of the bottle and we are left desperately trying to put it back in, or our dependency on these new systems has become irreversible.

Even now, in embracing the delights of conversations with ChatGPT and the like, we are handing over many insights about ourselves by the very act of putting our questions to these third parties. And if we want these clever robots to do some work on our systems development and coding, we have to first allow them to look at the IP we have so far created – perhaps hoping they then forget about it, rather than build it into their own (and their owners’) next outputs.

Let’s not overlook the fact that these new fast-learning machines have their own creators and masters and are not by definition munificent benign beings. And that’s before they truly develop minds of their own…

Peter Stonham is the Editorial Director of TAPAS Network

This article was first published in LTT magazine, LTT867, 24 April 2023.
