Why AI experts' jobs are always decades from being automated

This is a guest post by Allen Hoskins. Allen’s Twitter & Substack.


Experts say artificial general intelligence (AGI) is around forty years away and that full automation of human labor is almost a century out. At the same time, advances like ChatGPT and DALL-E 2 seem to send a different message. Silicon Valley and leading AI initiatives expect this technology to spark radical change over the next decade – enough to redefine society from the ground up.

So why do AI researchers keep placing AGI in the latter half of the century? And should we even take them seriously?

Experts don’t want to admit that AGI could be a decade away because it would make them irrelevant

AI experts' work is spread across machine learning and computational neuroscience, and a plethora of research directions are being explored: reinforcement learning, convolutional neural networks, image classifiers, neurosymbolic modules, hybrid systems that try to directly imprint structure from the brain… and that still barely scratches the surface of AI approaches.

Yet today’s premier AI systems make little use of most AI researchers' work. The most powerful models in 2023 are still based on Google’s Transformer architecture, first introduced in 2017. Under the current paradigm, progress mainly comes from solving the engineering challenges of transformers. These include, but are not limited to, hardware scaling, data curation, algorithmic tweaks, careful fine-tuning, and more compute. Advances are, by and large, whatever serves to scale these models up further – embodying the mantra of “scale is all you need.”

In contrast, academia values publishing research that is “novel” and “innovative.” Researchers must craft a unique AI portfolio to stand out from the crowd and capture shares of the S&P H-INDEX – hopefully bolstering their odds of securing future grants. Academics must constantly innovate or die trying.

Transformers, though, just want more server racks, more engineers, and more Reddit memes to train on. That doesn’t leave much room for the intellectual freedom and innovation academia demands, or for other AI approaches to contribute substantially.

Instead, we’re now in the middle of an industry speedrun to see if this actually scales all the way to AGI.

If scaling transformer-like systems beats every other approach and single-handedly leads to AGI, most researchers will have no academic route to fall back on. In the new paradigm, the promising approaches scholars built their careers on could end up inferior, forgotten, and of little benefit on the road to AGI. No wonder recent AI trends are unsettling: most AI scientists risk losing their sense of purpose, watching their work become meaningless and ignored, and being left to wonder how it all happened so fast.

Scientists' belief that more science is critical for AGI means that engineering-based successes don’t count as “real” progress

Early in their careers, scientists choose to pursue AI methods that seem promising and interesting, then develop the priors and beliefs for why those methods could eventually work. Neuroscientists working in AI, for example, usually contemplate systems significantly more complex than deep learning. These experts believe you need to apply deep theory to a model to make it truly intelligent; the brain is their mandatory guide to designing artificial intelligence.

Most people in machine learning also don’t see scale as anywhere near sufficient for AGI – many don’t even think that’s a possibility. They view scale as meaningful, but most researchers treat it as just one tool to use along the way. Implicit in this perspective is the need for fundamentally new architectures: you can’t just scale transformers to AGI.

ChatGPT, however, ignores all of that. When the model’s performance stalls, the parameter count gets raised, more compute gets allocated, more engineering optimizations are deployed, even deeper recesses of the internet and even cringier Reddit memes get scraped – rinse and repeat. The process remains straightforward and formulaic.

And once in a while someone wonders, “What if I ask the model to think step by step?” and improves the model’s performance more than all the scientists combined across several years of novelty-maxxing NeurIPS papers.
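For a sense of how low-tech that trick is, here is a minimal sketch of zero-shot “think step by step” prompting. It assumes the openai Python client and an API key; the model name and the question are placeholders of mine, not anything specified in this post.

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# Assumes: openai Python client installed and OPENAI_API_KEY set in the environment.
# The model name and the question are placeholders, not taken from this post.
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        # The entire "innovation": append a nudge asking the model to reason out loud.
        {"role": "user", "content": question + "\nLet's think step by step."}
    ],
)

print(response.choices[0].message.content)
```

The only change from an ordinary query is the appended sentence; everything else is a stock API call.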

Many experts dismiss such simple models, with their inherent lack of complex theory, as nonsense – statistical noise that merely “appears” intelligent.

They often focus on raw numbers and linear trendlines while ignoring the story told by the models’ emergent abilities. Thus, many believe these systems will always falter on real tests of intelligence and fundamentally can’t become AGI.

Most experts, then, simply won’t update their AI timelines in response to scaling progress.

Academics can’t build AI timelines when they don’t have access to FAANG’s AI frontier

Researchers need to be probing the world’s best AI systems in order to understand their abilities and predict AI’s future – testing things like how scale improves a system, how reasoning ability changes over time, and how these models compare to humans, as in the toy sketch below.
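A rough illustration of what such probing looks like, assuming a hypothetical query_model helper and made-up model names – in practice, the stub below is exactly where an academic gets stuck, because the real frontier checkpoints are not available to call.

```python
# Toy sketch of a scaling probe: score a family of progressively larger models
# on the same reasoning questions and watch how accuracy moves with scale.
# The model names and `query_model` are hypothetical placeholders.

QUESTIONS = [
    ("If Alice is taller than Bob and Bob is taller than Carol, who is shortest?", "carol"),
    ("What is 17 * 24?", "408"),
]

MODEL_FAMILY = ["toy-125m", "public-1b", "public-13b", "frontier-???"]  # made up


def query_model(model_name: str, prompt: str) -> str:
    """Stand-in for an API call to the system being probed.

    Without access to the actual models there is nothing real to call,
    so this stub just returns an empty reply.
    """
    return ""


def accuracy(model_name: str) -> float:
    correct = 0
    for question, expected in QUESTIONS:
        reply = query_model(model_name, question)
        correct += int(expected in reply.lower())
    return correct / len(QUESTIONS)


for model in MODEL_FAMILY:
    print(f"{model}: {accuracy(model):.0%} correct")
```

Run as written, every model scores 0% – which is roughly how much an academic can learn about the frontier without access.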

Unfortunately, most academics don’t have the models necessary for that critical work.

Due to a lack of funding and no access to industry’s cutting-edge AI, researchers' options are heavily restricted. They’re left with small toy models and older public models like GPT-3 – soon to be out of date. GPT-3 might not strike people as old and outdated; they might instead see it as the biggest, leading model. That’s a reasonable thing to think, but it’s an illusion – ChatGPT itself is almost a year old.

More powerful systems are always being built and tested months before release. Academics can’t predict the actual limitations or capabilities of scale and deep learning without direct access to the powerful, secretive models that OpenAI, DeepMind, or Google Brain could release at any moment.

GPT-3 and ChatGPT provide but a glimpse of what current AI systems can do.

FAANG restricts experts' research desires, meaning many will never switch over and experience the AI frontier

Academics choose the long academic path for a reason. Fundamentally, they want to express their creativity and freedom within a research area that suits their intellectual curiosity – to ask big questions and find answers in thriving research communities.

AI’s new keyholder, FAANG, restricts these desires through binding responsibilities and directives. At a moment’s notice, a system you hardly understand, built by dozens of people you barely know, could become your responsibility to fix. You’re often separated from the craft of building from the ground up and instead serve to fulfill a company’s wishes.

While many people are content with this and their ever-thicker wallets, others are losing a kind of magic: the kind with big theories, a sense of independence, and the recognition of accomplishment promised by academia. Some avoid the switch because scholarly crowds might view it as less “prestigious.” And even if you work on amazing industry AI, companies don’t always publish their research – meaning your multi-year project may never see the light of day and nobody will ever cite you by name.

All of this means many researchers will never step away from academia to go full-time at a corporation, and so they will never get access to FAANG’s cutting-edge AI frontier – left with ChatGPT and the psets they assign to hapless undergrads to test it on.

Experts base their predictions on the past, not the future & struggle to orient to the new, rapid pace of AI innovation

Sudden change is a struggle for experts in any line of work, because people bias their predictions toward the average of “their” work experience rather than toward priors that shift with newer trends. And machine learning scientists are in a much different field now than they were ten, or even three to five, years ago.

In the past, challenges may have seemed daunting given the few engineers available and the slow pace of innovation; every bottleneck, bug, and error risked delaying a system by months. That built their formative impression of machine learning as a “slow-moving” field. Most of those running academic AI labs today grew up in a time when neural networks barely worked at all.

That was before entire industries designed hardware to accelerate their AI, and before ten thousand FAANG scientists were dedicated to solving key challenges in machine learning. Now FAANG is willing to invest big in its singular, premier AI groups – ten-billion-dollars big, in Microsoft’s case.

Unable to zoom out, experts will keep extrapolating their historical experience linearly, and if the field is driven by exponentials they might actually be the last ones to notice an impending AI explosion – waking up one day to find that AI has killed everyone. Oops, sorry, I meant everyone’s h-indexes.

Conclusion

Artificial intelligence’s lightning-fast pace is disconcerting. Experts are caught in a direct conflict between the field they built their lives around and the rapid innovation that could threaten everything they’ve worked for. With pressures mounting, a great many don’t actually want to stay caught up and look through a clear lens.

Experts can either accept that they may be about to lose everything or continue saying AGI is far away. Overwhelmingly, they choose the latter option.