TAPAS.network | 22 January 2025 | Editorial Opinion | Peter Stonham
TECHNOLOGICAL DEVELOPMENTS have been coming thick and fast over the past few decades — the period of what might be called the digital communications revolution. No one had a map of where society was going when chip-powered computing first became generally available, bringing possibilities of large-scale data management and information transfer that had previously seemed impossible.
The rate of growth in processing power and distribution systems has been phenomenal. Moore’s Law famously holds that the number of transistors on an integrated circuit doubles roughly every two years, and computing power has multiplied exponentially as a result. Consequent advances in digital electronics, and their falling cost, have brought increases in memory capacity, improvements in sensors and transmission systems, and the enormous firepower of data centres and the cloud. These ongoing changes have been a driving force of technological and social change, productivity, inventive possibility, and economic growth.
Key features are an almost infinite ability to review existing information very quickly, a huge repertoire of analytic algorithms, and the capacity to act far beyond human mental and manual capability, albeit to predetermined, human-specified objectives.
Before long, however, it became clear that all this computational power was heading well beyond the ‘thinking’ and ‘reasoning’ capacity of the human mind — not just a single mind, but many minds combined. The term then coined was ‘Artificial Intelligence’: a useful catch-all phrase for something beyond ourselves, but in some ways misleading, and certainly subject to differing definitions and interpretations. Worse, it was easily picked up in the media, popular culture and politics as an idea to hype, fear or exploit without any generally agreed meaning, now reduced to just two initials.
This confusion and imprecision have made debate about the implications of deploying such a paradigm-shifting technological force an often unproductive and unstable area of public policy: a new Wild West. Never more so than at the present moment, when the Government has seemingly moved with break-neck desperation, rather than measured consideration, to ride this wave as the route to a magical new era of boundless economic prosperity.
Of course, some remarkable achievements are potentially on the way through the deployment of this ‘AI’ — much of which is more properly called automation, machine learning, super-speed data interrogation, pattern recognition, character and image reading and generation, statistical and probabilistic analysis, incident diagnosis, and other examples of what the massive capacity of current processing power and information aggregation can achieve. It has the potential to solve intractable problems in medicine, mathematics, astrophysics, chemistry, the life sciences and well beyond.
But AI in its ultimate form goes a lot further than simply helping humans solve problems and improve processes.
There is also, unarguably, a prospect of synthetic neural networks and other enhanced interpretive and decision-making capability being unleashed, sometimes using human tissue, and allowing ‘machines’ to become virtual equivalents of humans, and in due course, more capable than, or fused with, them.
This might indeed be the area properly reserved for the name ‘Artificial Intelligence’. It is so significant for the future of mankind that a full discussion must surely take place of how these limitless possibilities are developed and deployed, and of how potentially unwelcome outcomes might be anticipated and, if necessary, controlled. Regrettably, this unknown and unmapped territory seems in danger of being lost in a general conversation about an ill-defined ‘AI’ that now ranges from the monitoring of road conditions, highly efficient internet search and smart data handling systems, to the prospective replacement of a huge range of industrial and administrative roles, and the idea of an ultimate ‘singularity’ between homo sapiens and the manufactured minds and bodies of digitally-driven robots.
Put bluntly, we humans urgently need to establish a better common understanding of what we are talking about with AI: both at the level of high-level policy discussion, and in managing in detail the man-machine interface, and eventual fusion, that will before long irrevocably alter our daily and lifetime experiences, and maybe our very existence.
The human brain has a unique ability to think outside the box: to imagine, conceptualise, create, explore and debate. These are the kinds of capability we need to deploy now, in a world where systems engineering, the comfort of shared opinions and slogans, and instant gratification and distraction can easily lead to a narrowing of possibilities rather than a broadening of them. It is true that AI will allow us to review an enormous number of ways forward very quickly, but for now it is still humans who pose the problems, ask the questions, and choose and decide on the best options.
Machines are brilliant at doing mindless or repetitive things far better than humans, and are now learning how to respond to predicted situations. But we do not yet have a set of principles to determine where they should be expected to do mindful or unique things, and to make decisions for us, and about us.
The ultimate outcome that the keenest AI proponents are looking to achieve is that ‘Singularity’: the point at which we no longer perceive the difference between the minds of robots and our own, and at which there is effectively a hard-wired interaction and/or integration of the AI mind and the human mind. A number of these experts confidently expect the singularity to be with us within the next 20 years, and work proceeds apace on that journey every day.
To some, that is an exciting and transcendent prospect. To others, a nightmare dystopia. It would be nice to think we had a choice about whether to go there, but that decision might not be so easy once the dynamic of the trajectory has been locked in, as some, including it seems our Prime Minister, appear so anxious and enthusiastic to do. There is hopefully a discussion still to be had, but it would seem very sensible to have it soon. And if we are to have it usefully, we must all be sure exactly what we are talking about in terms of the supposed limitless benefits of AI, and where they may irreversibly lead.
Peter Stonham is the Editorial Director of TAPAS Network
This article was first published in LTT magazine, LTT907, 22 January 2025.