The ‘Age of Uncertainty’ with AI

In the past, eras of rapid development and change have brought about times of enormous uncertainty. In his 1977 book The Age of Uncertainty, Harvard economist John Kenneth Galbraith described the achievements of market economics but also foresaw a time of instability, inefficiency, and social injustice.

We are already living in an era characterized by comparable uncertainties as we navigate the revolutionary waves of AI. This time, the driver is not economics alone but the relentless march of technology, particularly the emergence and rapid development of AI.

The expanding reach of AI

AI’s effect on daily life is already becoming easier to see. The technology is starting to permeate our routines, from Shakespearean-inspired haikus and AI-generated melodies to self-driving cars, chatbots that can impersonate missing loved ones, and AI assistants that help us at work.

The coming AI wave means that AI will soon be considerably more common. Professor Ethan Mollick of the Wharton School recently wrote about the findings of an experiment on the future of professional work. For the trial, Boston Consulting Group used two groups of consultants, each assigned a variety of common tasks. One group was able to supplement its efforts with currently available AI, while the other was not.

According to Mollick, consultants who used AI completed 12.2% more tasks on average, finished them 25.1% faster, and produced results of 40% higher quality than those who did not.

Of course, it is always possible that issues with large language models (LLMs), such as bias and confabulation, will simply cause this wave to dissipate, but that currently seems unlikely. It will be a while before we can truly grasp the magnitude of the tsunami, even though the technology is already exhibiting its disruptive potential. Here is a preview of what is coming.

The next wave of AI models

Compared to the current crop of LLMs, which includes GPT-4 (OpenAI), PaLM 2 (Google), LLaMA (Meta), and Claude 2 (Anthropic), the new generation will be more advanced and more general. Elon Musk’s new start-up, xAI, may also enter the field with a brand-new and potentially extremely powerful model. Reasoning, common sense, and judgment remain major obstacles for these models, but we can anticipate progress in each of these areas.

The Wall Street Journal reported that, among the next generation, Meta is developing an LLM that will be at least as capable as GPT-4; this is anticipated in 2024. Although OpenAI has been quiet about its plans, it is safe to assume the company is likewise working on its next generation, and it probably will not stay quiet for long.

The most important new model, according to information now available, is “Gemini” from the combined Google Brain and DeepMind AI team. Gemini might greatly outperform current technology. Sundar Pichai, the CEO of Alphabet, stated this past May that the model’s training was already under way.

At the time, Pichai wrote in a blog post that although it was still early, they were already witnessing incredible multimodal capabilities that had not been seen in earlier versions.

Multimodal technology can process and comprehend a variety of data inputs, including both text and images, so both text-based and image-based applications can be built on top of a single model. The mention of capabilities not present in earlier models points to more emergent, or unexpected, characteristics and behaviors. The ability to write computer code is an emergent example from the current generation, since it was not a predicted skill.
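
To make “multimodal” more concrete, here is a minimal sketch of how an application might combine text and image inputs in a single prompt. The MultimodalModel, TextPart, and ImagePart names and their interface are hypothetical illustrations for this article, not the actual API of Gemini or any other vendor’s model.

```python
# Minimal sketch of calling a multimodal model from an application.
# All names here are hypothetical stand-ins, not any vendor's real API.

from dataclasses import dataclass
from typing import List, Union


@dataclass
class TextPart:
    text: str


@dataclass
class ImagePart:
    path: str  # local path or URL of an image


Part = Union[TextPart, ImagePart]


class MultimodalModel:
    """Stand-in for a multimodal model client: one model, many input types."""

    def generate(self, parts: List[Part]) -> str:
        # A real client would encode each part (text tokens, image patches)
        # into a shared representation and return the model's response.
        # Here we simply echo what kinds of parts were received.
        kinds = ", ".join(type(p).__name__ for p in parts)
        return f"[model response to a prompt containing: {kinds}]"


def describe_chart(model: MultimodalModel, image_path: str) -> str:
    # Interleave text and image inputs in one prompt; relating the two
    # modalities is left to the model, not the application.
    prompt: List[Part] = [
        TextPart("Summarize the trend shown in this chart in one sentence:"),
        ImagePart(image_path),
    ]
    return model.generate(prompt)


if __name__ == "__main__":
    print(describe_chart(MultimodalModel(), "sales_q3.png"))
```

The design point is that the application simply mixes parts of different modalities in a single request; interpreting them together, such as reading a chart and describing its trend, is the model’s job.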

AI models that are a Swiss Army knife?

There have been rumors that Google has provided early access to Gemini to a select number of businesses; SemiAnalysis, a reputable semiconductor research company, might be one of them. If those reports are accurate, Gemini may be 5 to 20 times more advanced than current GPT-4 models.

The design of Gemini will probably be based on DeepMind’s Gato, which was unveiled in 2022. According to a report from last year, “the deep learning [Gato] transformer model is described as a ‘generalist agent’ and claims to carry out 604 different and largely routine activities with diverse modalities, observations, and action parameters. It’s been called the Swiss Army Knife of artificial intelligence models. It is unmistakably far more general than any AI systems created so far, and in that sense, it seems to be a step towards AGI.”
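
The “generalist agent” idea can be illustrated with a toy sketch: serialize every modality, whether text, images, or control actions, into a single token stream, so that one sequence model can be trained across many tasks. The tokenizers below are deliberately simplistic stand-ins for illustration, not DeepMind’s actual implementation.

```python
# Toy sketch of the core idea behind a "generalist agent" like Gato:
# serialize all modalities into one token stream for a single model.
# The tokenizers are simplified illustrations, not DeepMind's code.

from typing import List


def tokenize_text(text: str) -> List[int]:
    # Stand-in for a subword tokenizer: one token id per character.
    return [ord(c) for c in text]


def tokenize_image(pixels: List[List[int]]) -> List[int]:
    # Stand-in for patch embedding: flatten pixel values into tokens,
    # offset into their own id range so modalities do not collide.
    return [100_000 + p for row in pixels for p in row]


def tokenize_action(action: int) -> List[int]:
    # Discrete control actions get their own id range too.
    return [200_000 + action]


def build_episode_sequence(observation_text: str,
                           observation_image: List[List[int]],
                           action: int) -> List[int]:
    # Gato-style serialization: concatenate all modalities, in order,
    # into a single sequence a transformer can be trained on.
    return (tokenize_text(observation_text)
            + tokenize_image(observation_image)
            + tokenize_action(action))


if __name__ == "__main__":
    seq = build_episode_sequence("move left", [[0, 255], [255, 0]], action=3)
    print(seq)  # one flat token stream covering three modalities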

Towards artificial general intelligence (AGI)

Microsoft claims that GPT-4 has already demonstrated “sparks of AGI” and is capable of solving complex problems in a variety of fields, including math, coding, vision, medicine, law, psychology, and more, without the assistance of a human. By superseding all current models, Gemini could be a significant step towards AGI. It is expected to be released at several levels of model capability, likely over the course of several months and possibly starting before the end of this year.

Gemini is sure to be spectacular, but even bigger and more advanced models are anticipated. In an interview with The Economist, Mustafa Suleyman, the CEO and cofounder of Inflection AI and a cofounder of DeepMind, forecast that frontier model companies, those at the cutting edge training the largest AI models, will train models over 1,000 times larger than GPT-4 within the next five years.

These models offer an unmatched range of possible uses and impacts on our daily lives, opening the door to both huge advantages and heightened risks. In Vanity Fair, David Chalmers, a professor of philosophy and neuroscience at NYU, is quoted as saying: “The upsides for this are enormous. Maybe these systems find cures for diseases and solutions to problems like poverty and climate change, and those are enormous upsides.” The article also analyzes the dangers, offering expert forecasts of terrible outcomes, such as the eventual annihilation of humanity, with likelihood estimates ranging from 1% to 50%.

The end of human-dominated history?

Historian Yuval Noah Harari stated in a talk that these upcoming developments in AI will mark not the end of history, but rather “the end of human-dominated history.” History will go on with someone else in charge; he likened the shift to an alien invasion.

Suleyman countered that AI tools would lack agency and hence be limited to what humans gave them the authority to do. Harari replied that this future AI may be “more intelligent than us,” asking, “How do you keep something smarter than you from developing agency?” With agency, an AI could take actions that are incompatible with human needs and values.

These next-generation models are the next step towards AGI and a future in which AI becomes ever more capable, integrated, and useful in everyday life. While there are many reasons for optimism, these anticipated developments add fuel to calls for oversight and regulation.

The regulatory problem

Even the CEOs of corporations that create frontier models believe that regulation is required. On September 13, after many of them appeared together before the United States Senate, it was reported that they “loosely endorsed the idea of government regulations” but that “there is little consensus on what regulation would look like.”

Senator Chuck Schumer, who organized the session, afterwards emphasized the difficulty of developing appropriate rules, noting that AI is technically complex, constantly developing, and “has such a wide, broad effect across the entire world.”

AI regulation may not even be feasible. For starters, much of the technology has been released as open-source software, which means it is effectively available for use by anybody. This alone might make many regulatory efforts difficult.

Precaution: logical and sensible

Some argue that the public pronouncements of AI leaders in support of regulation are a form of regulatory theater. As reported, Tom Siebel, a long-time Silicon Valley veteran and current CEO of C3 AI, contends that AI executives are roping legislators in by urging them to “regulate us,” knowing full well that there is not enough money or intellectual capital to ensure the safety of millions of algorithms, and that it therefore cannot be done.

It may be impossible, but we must try. As Suleyman emphasized in his Economist interview, this is the time to embrace a precautionary approach, not out of fear, but simply as a reasonable, sensible way to go.

As AI rapidly evolves from limited capabilities toward AGI, the promise is enormous, but the risks are severe. In this age of uncertainty, we must apply our deepest conscience, knowledge, and care to building these AI technologies for the benefit of humanity while guarding against the most extreme risks.
