Insights

Secret Cyborgs in the Workplace: AI Tools and the New Labor Landscape

Written by Logical Design Solutions | 9/20/23 8:18 PM

On July 18 of this year, OpenAI filed a trademark application for GPT-5, a release eagerly awaited for its anticipated gains in power, greater factual precision, multimodal capabilities, and advances toward Artificial General Intelligence (AGI), i.e., the ability to reason by analogy and solve problems the way humans do.

In his landmark book Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, Douglas Hofstadter argued that “the ability to make analogies is a fundamental aspect of human thought, but it is not clear that AI systems can replicate this ability.” Similarly, John McCarthy, often proclaimed the ‘Father of AI,’ stated fifteen years ago that "AI systems do not really understand analogies."

Based on these assessments, today’s highly educated and well-paid knowledge workers can all heave a collective sigh of relief – or can they?

We already know that GPT-3 and GPT-4 are large language models that are continuously updated based, among other things, on the relative frequency of patterns across billions of human user questions and statements. Although OpenAI represents that it does not retain information provided in conversations, the models do “learn” from every conversation, and the company doesn't announce when or what changes are being made to its data sources. Maybe it should come as no surprise that a team from the University of California announced in late July that LLMs like GPT-3 can now master reasoning by analogy – to some, the Holy Grail of AGI.

At the same time, in some quarters, ChatGPT is being renamed ‘CheatGPT,’ owing to the increasing number of employees using it to facilitate their everyday tasks. Most workers aren't telling their companies about their online forays. Instead, as one observer put it, “They've become secret cyborgs - machine-augmented humans who keep themselves hidden.” While decision-makers mull policy on generative AI, employees are happily deploying it to get ahead in their jobs and even knock off work early.

In a recent interview conducted by McKinsey, Ethan Mollick, an Associate Professor at the Wharton School of the University of Pennsylvania, had this to say:

People are secretly using AI around you all the time. I cannot emphasize how much secret AI use is happening in places you don’t expect. People come up to me after talks all the time, people you wouldn’t expect, people in charge of writing policy, and they’re using AI to do stuff because once you start using it, you’re like, “Why do I want to handwrite a document again?” It feels like you’re going from word processing to handwriting. Why would you do that? I know plenty of people at companies where AI is banned who just bring their phones and do all their work on AI and then email it to themselves because why would you not do that?

Does this behavior have any precedent, and if so, what are the consequences? GPT-5 may someday tell us that arcane and short-lived human practices in the early 2020s were analogous to 1980s bank tellers advising customers to use the ATM, travel agents referring clients to online booking facilities, or data entry clerks recommending the use of automation software.

The underlying question in every case, for every employer, becomes ‘If the technology is genuinely better than a person at the role, why would I employ people?’ Or ‘Why do I need six humans if one person can accomplish the same result in the same time frame simply by using AI-generated solutions?’

Perhaps those who are secretly reaping the preliminary benefits of clandestine AI usage today, enjoying more time off or gaining productivity points over their peers, will be labeled the dinosaurs, leeches, hotshots, and deadheads of tomorrow. We should heed the prediction that, in extreme cases, the knowledge worker of the future may have to hold as many as seven or eight jobs, with the average person working for several companies simultaneously rather than for one big corporation.

Last month, analysts at investment bank Goldman Sachs warned that “significant disruption” was on the way for the labor market, estimating that two-thirds of jobs could be automated to at least some degree. Meanwhile, the New York Post reported that even as the rapid rise of artificial intelligence tools threatens to wipe out millions of jobs around the world, a small number of the so-called “over-employed” are working remotely while exploiting tools like ChatGPT to secretly hold multiple full-time jobs. While some may argue that using AI tools in this way is unscrupulous, others see it as a way not only to cope with the rising cost of living but ultimately to demonstrate the type of work ethic that employers will continue to seek in the new world order.

In a reprieve for traditionalists, a recent survey conducted by the research firm Gartner found that 14% of companies had banned the use of chatbots like ChatGPT, primarily because they fear AI platforms might gain access to sensitive customer information, which businesses are legally obliged to protect. Others worry that employees will inadvertently divulge trade secrets in their prompts, or rely on error-prone responses from a chatbot without checking the machine's work.

As the growth of ChatGPT and other generative AI tools continues to accelerate, addressing the associated risks becomes increasingly critical. No matter how proactively organizations grasp these risks and implement strategies to mitigate them, the ranks of secret cyborgs appear set to grow exponentially in the short term, even as the tools they are using threaten to replace their own jobs and possibly hundreds of millions of others, in a cataclysmic reorganization of the workforce known as the Fourth Industrial Revolution.