Getting It Wrong: The AI Labor Displacement Error (Part 1)

INTRODUCTION

In a recent interview, OpenAI CEO Sam Altman said that his biggest surprise regarding the adoption of Artificial Intelligence has been that it displaced creative labor before intellectual and physical work. This was not the first time that Altman and other luminaries in the field have remarked on the gap between the realities of AI labor displacement and the long-standing, erroneous, and nearly universal assumption that AI would replace physical labor first, then intellectual work, and creative tasks last.

Of course, the opposite has proven true in practice. Creative labor, such as writing and digital image creation, is being displaced first by AI platforms like ChatGPT, Claude, DALL-E, and Midjourney. Many white-collar intellectual jobs, including computer programming, appear soon to follow. And while much simple, repetitive physical labor has long been replaced by mechanical automation, few jobs requiring complex physical intelligence have yet been threatened by Artificial Intelligence.

Most informed observers expect AI technology to radically transform the world in fairly short order, so a miss of this magnitude in a high-impact area like the labor market is a very big deal. Yet the “AI Labor Displacement Error” has not been the subject of rigorous inquiry. It’s not even a subject of discussion.

Currently, issues concerning AI are framed in terms of AI “Safety,” “Alignment,” or “The Control Problem.” This framing directs our attention to the technical problem of engineering a model’s behavior so that its outputs fall within an acceptable range. But this discussion of AI engineering problems is taking place in a vacuum, absent an understanding of the very nature of the thing being engineered.

As we plow ahead with creating the most powerful and world-changing technology in history, we have no general theory of Intelligence. Here we are at the dawn of the Age of Intelligence, and yet Intelligence as a general principle remains largely undefined. Disparate, conflicting definitions of the concept dot the intellectual landscape. It is common to hear top thinkers offer a scattershot of definitions of Intelligence while simultaneously hand-waving the problem away as “unknowable.”

But the answer is far from unknowable. In fact, the knowledge, along with much of the theoretical basis, needed to rigorously define and understand Intelligence as a general principle has been available for quite some time.

In my view, this is an untenable situation. A rigorous theory of Intelligence as a general principle is urgently needed, and a transdisciplinary effort to gather and fit the puzzle pieces together is required.

Failing to come to grips with the nature of Intelligence has already produced one massive miscalculation, one that has negatively impacted millions of workers in the creative arts. It is valid and important to ask: what other serious mistakes might we make, and how costly could they be? Could the lack of a solid working theory of Intelligence even pose an existential risk?

The failure to accurately and thoroughly define the general nature of Intelligence, in both humans and artificial neural networks, could well lead to further, far more catastrophic errors and misjudgments. This is a matter of the highest urgency, and it needs to be addressed now.

Thanks for reading Part 1 of “Getting It Wrong: The AI Labor Displacement Error and The Nature of Intelligence”.

Part 2, “The Nature of Intelligence”, is coming soon!

Please like, share and subscribe.

Follow The Singularity Project on X:

@01Singularity01
