What about the jobs ChatGPT could create?

Feb 20, 2023

7 mins

Rozena Crossman

Journalist and translator based in Paris, France.

It’s just months since ChatGPT was released but already the economic prophets are warning that the labor apocalypse is nigh. This impressive chatbot is slated to replace everyone from paralegals to programmers as it creates AI-generated content covering everything from coding to copywriting. Yet we can’t be sure whether any roles will disappear or even how they will be affected. So shouldn’t we also consider what jobs this evolving technology might produce?

Concerns that ChatGPT may put some people out of work are valid. The bot really is smarter and meaner than its predecessors. Its ability to write a long, heartfelt letter from a one-line prompt is light years beyond the AI that predicts how you’d like to sign your email. ChatGPT can imitate almost every English writing style with astounding precision – making it easy to forget you’re conversing with a machine – and it draws on one of the biggest datasets in AI history. Its search engine abilities are so powerful that Google declared a “code red,” which was described as “the equivalent of pulling the fire alarm.” Its ability to write, research and solve problems is unprecedented. Jobs that require the very qualities thought to be irreplaceable by automation — customer service reps, creative copywriters and analytical lawyers — have all been mentioned as at risk.

While ChatGPT can do lots of mundane tasks, such as summarizing texts or directing clients to the correct department, it isn’t able to empathize, to imagine or to reason – yet. It can’t understand the real-world nuances of a tricky client support case and it lacks the critical thinking needed to cross-examine a witness. When ChatGPT made waves for passing a bar exam, AI engineer Dr Heidy Khlaaf tweeted that the news wasn’t particularly impressive because, given all the data the bot has access to, it was like taking an open-book test.

Getting it wrong

What’s more, ChatGPT often generates incorrect and biased results. The bot’s own website admits that it “sometimes writes plausible-sounding but incorrect or nonsensical answers,” a phenomenon AI engineers call hallucination. Imagine a customer service bot giving wrong information in a tone so confident and articulate that the client thinks they’re dealing with a human. That could harm the business. This hallucination occurs largely because so much of ChatGPT’s data comes from the internet, which is rife with misinformation, disinformation — and prejudice. Cue the uncomfortable, viral example of ChatGPT telling one user that a “good” scientist was white.

There are large gaps in its knowledge too. AI researcher Osman Ramadan says, “I’m originally from Sudan, and if I ask ChatGPT about [that country], it doesn’t give me as much information as when I ask about the prime minister in the UK — then it’s much more informative. With Sudan, it makes a lot of stuff up, mixing the current president with the last president, and says stuff that just doesn’t make sense.”

Ramadan specializes in natural language processing and generative modeling, the two most important machine-learning techniques behind ChatGPT. He was a research engineer for Microsoft — working in its Task and Intelligence team as well as the Search and Recommendation group at Microsoft Search, Assistant and Intelligence (MSAI) — before moving to the autonomous e-commerce platform Shift Lab, and is currently working on his own venture in the AI space. His extensive involvement with AI has taught him that companies can’t rely solely on a bot, which is liable to produce erroneous and prejudiced content. Instead, he thinks ChatGPT will be used as a tool to boost productivity rather than replace professionals. Ramadan outsources some of his code-writing tasks to AI software, for example, and then checks whether the code is correct. This gives him more time to work on assignments that require innovation and critical thinking – skills that are currently AI-proof and highly valued in the labor market.

Digital transformation

But the idea of delegating to a bot rather than a coworker is exactly what’s keeping many up at night. All that time saved by AI could mean fewer tasks for Homo sapiens, resulting in employee numbers being cut. Jacqueline Rowe, who works as policy lead at Global Partners Digital, a digital rights NGO, suspects this is why the conversation has revolved around job loss rather than gain. “It has to do with the types of jobs that are being lost and the types of jobs that are being created,” she says. “On one level, there’s a lot of new potential for jobs like software engineer or academic researcher. But these are jobs for people who are highly educated, and probably already on quite a high income. That’s very different to someone being paid minimum wage to answer phone calls for customer service or respond to emails.” So far, Rowe has seen governments pour far more money into ramping up AI technology than into preparing their workforce for an impending digital transformation.

What keeps AI advancing, however, is human effort. And there are a lot of governments and tycoons with deep pockets heavily invested in the continuation of this lucrative field, such as Bill Gates and Elon Musk — both of whom have been involved with ChatGPT creator OpenAI. In the same way the internet created careers ranging from website developers to bloggers, the tech powering ChatGPT could easily become its own economic sector complete with tasks for a variety of workers.

AI inventor Ronjon Nag says, “I think it will be like the industrial revolution, which generated more factory workers. Then automation happened and we needed fewer factory workers, but then we ended up with more factories and more products, so there were still loads of jobs for factory workers.” As founder and president of the R42 Group, a venture capital fund for AI, biotechnology and wireless communication tech, Nag has been working with artificial intelligence for more than 30 years, from creating the first laptop with built-in speech recognition in 1984 to teaching classes on AI and genetics at Stanford University today. He is optimistic about ChatGPT. “I like to think these tools will give more opportunities to more people. The pie will get bigger for everybody, rather than the dystopian scenario,” he says.

How could ChatGPT generate these new employment possibilities? Let’s examine some of the potential jobs that may — or may not — expand the labor market pie.

Data labeling

Not all of the prospective roles will be lucrative, and some may even be harmful to employees’ mental health. Soon after ChatGPT was released, Time reported that OpenAI had paid data labelers in Kenya less than $2 per hour to review violent, prejudiced and sexually explicit content in order to train the bot to be less toxic. Although OpenAI discontinued the program, data labeling remains a crucial process in improving ChatGPT — or any AI based on natural language processing [NLP] — and keeping users safe from the bot spewing harmful content.

While data labeling is necessary, it doesn’t have to be traumatizing. Data labelers simply train AI to understand its dataset by categorizing the raw data: they might indicate that a photo shows a tree, or that an audio file contains certain words. The more mundane labeling tasks could be an opportunity for workers to quickly find jobs within the AI sector, while more nuanced labeling could provide experts in certain fields with a gainful living. “Let’s say you’re analyzing music tracks, and you need to label if a track actually corresponds to a certain movement. This kind of domain expertise will pay a lot more, and make these people expert consultants,” explains Ramadan.

But Rowe is not convinced that data labeling will become a big field of employment. “Part of the direction that NLP research has taken in the last five to 10 years is precisely to reduce the dependence on that kind of high-quality data labeling because it’s expensive,” she says, but notes there has been a recent, concerted push by AI researchers such as Timnit Gebru to prioritize rigorous data labeling.

Prompt engineering

Those already working with ChatGPT on a daily basis know that the way they word their prompts changes the quality of the bot’s response. For example, asking the bot “What jobs will you create?” and “What jobs will be created by the technology behind ChatGPT?” produces two very different replies.

It turns out that knowing how to properly communicate with ChatGPT is a valued skill set known as “prompt engineering.” These engineers understand the chatbot’s inner workings to the point where they can phrase prompts in specific, detailed ways to obtain a desired result. Platforms such as PromptBase have already sprung up, allowing prompt engineers to monetize their skills by selling their prompts to the public.

Ethics, policy and auditing

Between OpenAI’s problematic data labeling practices and one of its founders admitting that ChatGPT has a bias problem, there’s plenty of work to be done in the ethics department. “Responsible AI is becoming one of the biggest tracks in AI research,” says Ramadan. “There are actually titles appearing like ‘responsible AI researcher,’ and there are companies now that are mainly focused on how we remove bias from these models.”

If small and medium-sized businesses adopt ChatGPT en masse, will it spur more companies to hire consultants on AI ethics and bias? “I’m a little bit skeptical, because usually there needs to be some sort of major public scandal before companies say, ‘Maybe we should hire someone to make sure this doesn’t happen again,’” says Rowe.

A landmark law mandating bias audits is scheduled to pass this year in New York City. While the proposed legislation relates only to the AI used in hiring and employment decisions, it could inspire similar regulation for bots like ChatGPT — and propel a demand for AI bias auditors.

Machine-learning engineers

The more ubiquitous generative AI tools like ChatGPT become, the more we’ll need AI engineers. Ramadan believes they’ll eventually be as prevalent as IT admins. “Even if you have a barber shop, you might need an IT admin to help you with your website,” says Ramadan. He imagines a future where generative text programs like ChatGPT are combined with image-generating tools such as OpenAI’s Dall-E. “Let’s say the barber wants to visualize what a haircut would look like on a customer based on the customer’s description of what they want. They would need someone to maintain this technology, which would need to be continually updated,” he says.

Ramadan says that troubleshooting these open-access AI models may not require a university degree or a big network. “In IT, you don’t need to know electromagnetics — this is going to be similar,” he says. “You don’t need to know the specifics of how the AI was trained, you’d just need to know how to maintain it and collect the data.”

Tools like ChatGPT and Dall-E could democratize many industries. “People with less training can get into a job field they otherwise wouldn’t be able to get into,” Nag says. “It’s a bit like computer science. When I first started, you had to have a PhD plus five years’ [experience] to get into the field. Now there are all kinds of jobs in AI. There are people who manage the data, people who retrain it, people who take care of the computer systems, product managers who figure out what to use the AI for… The question is: will they be paid as much?” And no matter how you phrase that prompt, only humans can provide the answer.

Photo: Welcome to the Jungle
