Generative AI will upend the professions

The writer is the author of “The Future of the Professions”

ChatGPT opens a new chapter in the AI story we have been researching for over a decade. Our research focuses on the impact of AI on professional work, looking at the technology across eight fields, including medicine, law, teaching and accounting.

In general, the narrative of our book, The Future of the Professions, is optimistic. At a time when professional advice is too expensive, and our health, justice, education and auditing systems routinely fail us, AI offers the promise of easier access to the best expertise. Some professionals understandably find this threatening, as the latest generative AI systems are already outperforming human professionals at certain tasks, from writing efficient code to drafting compelling documents.

Contrary to those who have predicted for years that AI would remain “narrow”, the latest systems have a far wider scope than their predecessors, as happy to diagnose disease as to design buildings or create lesson plans.

These systems also refute the idea that AI must “think” in order to perform tasks that require “creativity” or “judgment”, a common line of defence among the old guard. High-performing systems do not need to “reason” about the law like a lawyer to produce solid contracts, or to “understand” anatomy like a doctor to give useful medical advice.

How have professionals reacted? Our original research and more recent work suggest a familiar pattern of responses. Architects tend to be open to new possibilities. Auditors are running for cover, as the threat to their data-driven work is obvious. Physicians can be dismissive of non-physicians, and management consultants are more willing to advise others on transformation than to change themselves.

Still, business leaders seem less dismissive of generative AI than they used to be.

Some are interested in how these technologies can streamline existing operations: a recent study by MIT researchers found that ChatGPT increased the efficiency of white-collar writing tasks, such as composing sensitive company-wide emails or punchy press releases, by almost 40%. Others are focused simply on laying off staff: the US online learning company Domestika, for example, reportedly laid off nearly half of its Spanish staff, calculating that those working on content translation and marketing materials could be replaced by ChatGPT.

While such layoffs may seem hasty, Goldman Sachs research predicts that as many as 300 million full-time jobs worldwide could be threatened by automation. Yet few professionals accept that AI will take on their most complex work. They imagine that AI systems will be confined to their “routine” activities: the mundane, repetitive parts of their jobs, such as document review, administrative tasks and the daily drudgery. But when it comes to complex activities, many professionals believe clients will always want the in-person attention of an expert.

Every element of this claim is open to challenge. The capabilities of GPT have already gone far beyond the “routine”. As for in-person attention, tax returns are instructive: few people who file using an online tool instead of a human expert regret the loss of social interaction with their tax adviser.

To claim that clients need trusted expert advisers is to confuse process and outcome. Patients don’t want doctors, they want health. Clients don’t want litigators, they want to avoid pitfalls in the first place. People need solutions they can trust, whether they come from flesh-and-blood professionals or from artificial intelligence.

This leads to wider problems. How are existing professionals adapting, and what are we training young professionals to become? The worry is that we are producing 20th-century artisans whose knowledge will soon be redundant. Today’s and tomorrow’s workers should instead acquire the skills needed to build and run the systems that will replace their old ways of working: knowledge engineering, data science, design thinking and risk management.

Some see teaching people to code as an urgent priority. But coding is already an activity at which AI systems impress: AlphaCode, developed by DeepMind, outperformed nearly half of the entrants in major coding competitions. Instead, we should watch for the emergence of new and unfamiliar roles, such as prompt engineers, who for now are best placed to guide generative AI systems and coax the best responses out of them.

There are certainly risks with the latest AI. A recent technical paper on GPT-4 acknowledged that such systems can “amplify biases and perpetuate stereotypes”. They can “hallucinate”, confidently getting things completely wrong, and they raise the spectre of technological unemployment, sparking a frenzy of ethical and regulatory debate. At some stage, however, as performance improves, the benefits will become hard to dispute, and the threats and drawbacks will tend to be outweighed by the improved access AI provides.

The professions are unprepared. Many firms remain focused on selling their people’s time, and their growth strategies are premised on building ever-larger contingents of traditional lawyers, auditors, tax advisers, architects and others.

Huge opportunities surely lie elsewhere, especially in taking an active part in developing generative AI applications for clients.
