From top, left to right: American news magazine Time cover featuring a ChatGPT conversation; mechanical dove image created in Midjourney; AlphaFold 2 performance, experiments, and architecture.
The volume of Google searches for the term "AI" rose sharply in 2022.
In 2018, the Artificial Intelligence Index, an initiative from Stanford University, reported a global explosion of commercial and research efforts in AI. Europe published the largest number of papers in the field that year, followed by China and North America.[10] Technologies such as AlphaFold led to more accurate predictions of protein folding and improved the process of drug development.[11] Economists and lawmakers began to discuss the potential impact of AI more frequently.[12][13]
The release of ChatGPT, a chatbot based on a large language model created by OpenAI, in November 2022 accelerated the pace of the AI boom.[14] ChatGPT reached over 100 million users within two months and, according to investment bank UBS, was the fastest-growing consumer software application in history.[15][16] Several other companies have since released competing chatbots. Around the same time, text-to-image models such as DALL-E and Midjourney became popular as a way to generate complex, photo-like illustrations.[17] Speech synthesis software also became able to replicate the voices and speech of specific people.[18]
According to metrics from 2017 to 2021, the United States outranks the rest of the world in terms of venture capital funding, the number of startups, and patents granted in AI.[19][20] Scientists who have immigrated to the U.S. play an outsized role in the country's development of AI technology.[21][22] Many of them were educated in China, prompting debates about national security concerns amid worsening relations between the two countries.[23]
An image generated by Stable Diffusion based on the text prompt "a photograph of an astronaut riding a horse"
Text-to-image models captured widespread public attention when OpenAI announced DALL-E, a transformer system, in January 2021.[33] A successor capable of generating complex and realistic images, DALL-E 2, was unveiled in April 2022.[34] An alternative text-to-image model, Midjourney, was released in July 2022.[35] Another alternative, the open-source model Stable Diffusion, was released in August 2022.[36]
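Because Stable Diffusion is open source, it can be run locally or through community libraries such as Hugging Face's diffusers. The following minimal Python sketch illustrates how such a text-to-image model is typically invoked; the library, model identifier, and prompt are assumptions for the example rather than details taken from the cited sources.

# Illustrative sketch: generating an image with Stable Diffusion via the
# open-source Hugging Face "diffusers" library (model ID and prompt are examples).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; any compatible weights work
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA-capable GPU; use .to("cpu") otherwise (much slower)

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut_rides_horse.png")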
GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text.[43] The tool has been credited with spurring and accelerating the AI boom following its release.[44][45][46] An upgraded version called GPT-3.5 was used in ChatGPT, which later garnered attention for its detailed responses and articulate answers across many domains of knowledge.[47] A new version called GPT-4 was released on March 14, 2023, and was used in the Microsoft Bing search engine.[48][49] Other language models have been released, such as PaLM and Gemini by Google[50] and LLaMA by Meta Platforms.
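These models are most often accessed programmatically through hosted APIs rather than run locally. As a minimal illustration, a chat completion can be requested from a GPT model through the OpenAI Python client in a few lines; the model name and prompt below are assumptions for the example, not details from the cited sources.

# Illustrative sketch: querying a hosted large language model through the
# OpenAI Python client (reads the OPENAI_API_KEY environment variable).
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",  # example model name
    messages=[{"role": "user", "content": "Summarize the AI boom in two sentences."}],
)
print(response.choices[0].message.content)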
15.ai, a free text-to-speech web application launched in March 2020, was an early example of AI voice synthesis during the boom. The platform could generate convincing character voices using as little as 15 seconds of training data.[53] It gained widespread attention in early 2021 for its ability to synthesize emotionally expressive speech in the voices of popular fictional characters,[54][55] becoming particularly influential in online content creation.[56][57]
By April 2024, full-length original songs generated by a pseudonymous creator named "Glorb", using the voices of characters from the cartoon SpongeBob SquarePants, had been streamed millions of times on Spotify and YouTube. Glorb is not affiliated with the copyright holder or the original voice performers.[58][59]
ElevenLabs allows users to upload voice samples and generate audio that sounds similar to the samples. The company was criticized[60] after controversial[61][62] statements were generated in the vocal styles of celebrities, public officials, and other famous individuals,[63] raising concerns that the technology could make deepfakes even more convincing.[64] An unofficial song known as "Heart on My Sleeve", created using the voices of musicians Drake and The Weeknd, raised questions about the ethics and legality of similar software.[65]
Electricity consumed by hardware used for AI has increased demand on power grids, prolonging the operation of fossil fuel power plants that would otherwise have been retired.[66][67][68]
Microsoft, Google, and Amazon have all invested in existing or proposed nuclear power plants to meet these demands.[69][70] In September 2024, Microsoft signed a deal with Constellation Energy to purchase power from a reactor at Three Mile Island that had been shut down in 2019. The reactor is set to reopen in 2028 to provide power to Microsoft's data centers; it sits next to the unit involved in the 1979 accident, the worst in US nuclear power history.[71][72][73]
During the AI boom, different groups emerged, ranging from those seeking to accelerate AI development as quickly as possible to those more concerned about AI safety who would like to "decelerate".[74] According to a survey published in April 2025 by the Pew Research Center, 43% of American adults thought that AI technology was more likely to harm them in the future, while 24% thought it was more likely to benefit them. Women were more likely than men to be concerned about AI technology.[75]
In 2024, AI patents in China and the U.S. accounted for more than three-fourths of AI patents worldwide.[76] Though China had more AI patents overall, the U.S. had 35% more patents per AI patent-applicant company than China.[76]
Some economists have been optimistic about the potential of the current wave of AI to boost productivity and economic growth. Notably, Stanford University economist Erik Brynjolfsson has argued in a series of articles for an "AI-powered Productivity Boom"[77] and a "Coming Productivity Boom".[78] Others, such as Northwestern University economist Robert Gordon, remain more pessimistic.[79] Brynjolfsson and Gordon have made a formal bet, registered at Long Bets, about the rate of productivity growth in the 2020s, to be resolved at the end of the decade.[80]
Big Tech companies view the AI boom as both opportunity and threat; Alphabet's Google, for example, recognized that ChatGPT could become an innovator's dilemma-like replacement for Google Search. The company merged its DeepMind subsidiary with Google Brain, a rival internal unit, to accelerate its AI research.[81]
In 2023, San Francisco's population increased for the first time in years, with the boom cited as a contributing factor.[83]
Machine learning resources, whether hardware or software, can be bought and licensed off the shelf or accessed as cloud platform services.[84] This enables wide, publicly available use and spreads AI skills.[84] Over half of businesses consider AI a top organizational priority and the most crucial technological advancement in many decades.[85]
Across industries, generative AI tools became widely available through the AI boom and are increasingly used by businesses across regions.[86] A main area of use is data analytics. Machine learning, often seen as an incremental change, improves industry performance.[87] Businesses report AI to be most useful for increasing process efficiency, improving decision-making, and strengthening existing services and products.[88] Through adoption, AI has already positively influenced revenue generation in multiple business functions, with businesses experiencing revenue increases of up to 16%, mainly in manufacturing, risk management, and research and development.[86]
Investment in AI and generative AI has grown with the boom, rising from $18 billion in 2014 to $119 billion in 2021. Notably, generative AI's share of this investment was around 30% in 2023.[89] Generative AI businesses have also attracted considerable venture capital even though regulatory and economic outlooks remain uncertain.[90]
Tech giants capture the bulk of the monetary gains from AI and act as major suppliers to or customers of private users and other businesses.[91][92]
Inaccuracy, cybersecurity, and intellectual property infringement are considered the main risks associated with the boom, although few organizations actively attempt to mitigate them.[86] Large language models have been criticized for reproducing biases inherited from their training data, including discriminatory biases related to ethnicity or gender.[93] As a dual-use technology, AI carries risks of misuse by malicious actors.[94] As AI becomes more sophisticated, it may eventually become cheaper and more efficient than human workers, which could cause technological unemployment and a transition period of economic turmoil.[95][12] Public reaction to the AI boom has been mixed, with some hailing the new possibilities that AI creates, its sophistication, and its potential to benefit humanity,[96][97] and others denouncing it for threatening job security[98][99] and for giving "uncanny" or flawed responses.[100]
Tech companies such as Meta, OpenAI, and Nvidia have been sued by artists, writers, journalists, and software developers for using their work to train AI models.[107][108] Early generative language models, such as GPT-1, were trained on BookCorpus, and books are still the best source of training data for producing high-quality language models. ChatGPT aroused suspicion that its training sources included libraries of pirated content after the chatbot produced detailed summaries of every part of Sarah Silverman's The Bedwetter and verbatim excerpts of paywalled content from The New York Times.[109][110] In protest against UK government consultations on how copyrighted music can legally be used to train AI models,[111] more than a thousand British musicians released a silent album entitled Is This What We Want?[112]
A Voice of America video covering potential dangers of AI-generated impersonation, and laws passed in California to combat it
The ability to generate convincing, personalized messages as well as realistic images may facilitate large-scale misinformation, manipulation, and propaganda.[113]
On May 20, 2024, a week after OpenAI demonstrated updates to ChatGPT's Voice Mode feature,[116][117] actor Scarlett Johansson issued a statement[118][119] about the "Sky" voice shown in the demo, accusing OpenAI of making it sound very similar to her own voice and to her portrayal of the artificial intelligence assistant Samantha in the film Her (2013), despite her having refused an earlier offer from the company to provide her voice for the system. The agent of the unnamed voice actress who voiced Sky stated that she had recorded her lines in her natural speaking voice and that OpenAI had mentioned neither the film Her nor Johansson.[120][121]
Several incidents involving the sharing of non-consensual deepfake pornography have occurred. In late January 2024, deepfake images of American musician Taylor Swift proliferated online. Several experts have warned that deepfake pornography can be created and disseminated quickly because the technology is relatively easy to use.[122] Canada introduced federal legislation targeting the sharing of non-consensual sexually explicit AI-generated photos; most provinces already had such laws.[123] In the United States, the DEFIANCE Act was introduced in March 2024.[124]
A large amount of electricity is needed to power generative AI products,[125] making it more difficult for companies to achieve net zero emissions. From 2019 to 2024, Google's greenhouse gas emissions increased by 50%.[126]
AI is expected by researchers of the Center for AI Safety to improve the "accessibility, success rate, scale, speed, stealth and potency of cyberattacks", potentially causing "significant geopolitical turbulence" if it reinforces attack more than defense.[94][127] Concerns have been raised about the potential capability of future AI systems to engineer particularly lethal and contagious pathogens.[128][129]
The AI boom is said to have started an arms race in which large companies are competing against each other to have the most powerful AI model on the market, with speed and profit prioritized over safety and user protection.[130][131][132]
Rapid progress in artificial intelligence has also sparked interest in whether some future AI systems will be sentient or otherwise worthy of moral consideration,[133] and whether they should be granted rights.[134]
^Kurosawa, Yuki (January 19, 2021). "ゲームキャラ音声読み上げソフト「15.ai」公開中。『Undertale』や『Portal』のキャラに好きなセリフを言ってもらえる" [Game Character Voice Reading Software "15.ai" Now Available. Get Characters from Undertale and Portal to Say Your Desired Lines]. AUTOMATON (in Japanese). Archived from the original on January 19, 2021. Retrieved December 18, 2024.