Assessing Generative AI: The Next Frontier for Brazilian Startups
TL;DR
- The 2017 “Attention Is All You Need” research paper was a major breakthrough in the field of natural language processing (NLP). The paper introduced a new way to build neural networks called the Transformer, which paved the way for today’s large language models (LLMs);
- Right now, it seems that Generative AI may be slightly past the Peak of Inflated Expectations phase and heading towards the Trough of Disillusionment in the Hype Cycle;
- The most valuable generative AI applications appear to solve domain-specific problems that boost human labor rather than replace it;
- Entrepreneurs should think about how to build a focused product and establish a defensible moat. This could take the form of a strong brand reputation, unmatched security features, network effects, or proprietary IP;
- Opportunities for Brazilian startups lie mostly in the application layer, taking advantage of The Home-Field Advantage (crafting a solution that deeply resonates with local needs) and Affordable Agility (building tech solutions in Brazil at a significantly lower cost than in the US);
In the ever-oscillating tech industry, there are few moments as transformative as witnessing a new technology transition from nascent wonder to ubiquity. Generative AI, a mere whisper in hushed tech circles two years ago, has surged forward, causing a brand new deep learning renaissance and making the AI winters of the past feel like distant memories.
We now exist in an era where generative diffusion models and Transformer-powered large language models (LLMs) have not just emerged but have begun defining the zeitgeist in the tech and VC spaces.
Generative AI products are no longer confined to the boundaries of niche tech enthusiasts. The cinematic AI image generator Midjourney has more than 14 million users on its Discord server and search interest for the service exploded 16,364% in the past 12 months. OpenAI’s popular chatbot ChatGPT is the fastest-growing consumer product ever, growing to 1 million users in 5 days and 100 million users merely two months after launching. Transformative products like these have democratized AI, allowing us to use this advanced technology with only natural language. Gone are the days when the magic of machine learning could only be accessed in a research lab, under layers of complex code.
Generative AI, at its core, develops new content, drawing from existing patterns. The technology’s potential is global, yet every region, every country, brings its unique spin to this sphere. What could Brazil’s role be in this space? What opportunities can the Brazilian innovation ecosystem take advantage of to become competitive with global players? To answer these questions, we have to contextualize the Generative AI landscape today and analyze how we got to this point in AI development:
A Brief History of Artificial Intelligence
The history of AI is a story of progress and setbacks, tracing back to the mid-20th century when Alan Turing proposed the Turing Test as a way to measure whether or not a machine could be considered intelligent. In the 1950s, AI research began to take off, with the development of early rule-based systems like the Logic Theorist by Allen Newell and Herbert Simon, widely regarded as the first AI program.
In the late 1950s and early 1960s, AI researcher Frank Rosenblatt and his colleagues developed the Perceptron, a simple neural network that could learn to classify patterns. This was a major breakthrough in AI: a system that could learn from data, and a conceptual forerunner of the large-scale neural networks that came later. Increased funding from US government agencies, such as DARPA, also led to a boom in the research and development of early machine learning systems across academia and defense at this time. One area of focus for the US government was programs that could transcribe, translate, and modify human language. One such program was ELIZA, an early chatbot developed by MIT researcher Joseph Weizenbaum in 1966.
In the 1970s, AI research experienced its first “winter.” A number of factors, including a lack of data, deficiencies in computing power, and reduced funding, sent the field into a period of decline. In the 1980s, however, AI research made a comeback, with the popularization of backpropagation (a method for training neural networks), the introduction of expert systems (AI programs that simulate the judgment of human experts), increased funding, and new commercial applications.
In the 1990s and 2000s, AI research continued to advance, with the development of more powerful AI programs, such as IBM’s chess-playing Deep Blue, and more capable models, particularly neural networks. Modern neural networks can learn from far more data than their predecessors, and they are used today to achieve state-of-the-art results on a number of AI tasks, including image recognition, natural language processing, and speech recognition. By the 2010s, deep learning had become the dominant paradigm in AI research. All of the advances in generative models that have enabled this inflection point in technology come out of this paradigm.
An Overview of Generative Models
One of the earliest families of generative models to gain traction was the Generative Adversarial Network (GAN), introduced by Ian Goodfellow and his collaborators in 2014. GANs can be used to generate realistic images, text, and music. They work by pitting two neural networks against each other: a generator and a discriminator. The generator tries to create realistic outputs, while the discriminator tries to distinguish between real and fake outputs. This competition drives the generator to produce increasingly realistic images, text, and audio (a simplified training loop is sketched below). However, GANs were hard to train and even harder to scale in the mid-2010s, so large-scale commercial applications remained limited.
GAN Model Diagram
Source: Google Developers
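As a rough illustration of the adversarial setup described above, here is a minimal, self-contained PyTorch sketch of a GAN training loop. The tiny fully connected networks and the random stand-in “data loader” are placeholders chosen purely for illustration, not a production architecture.

```python
import torch
import torch.nn as nn

# Tiny stand-in networks; real GANs use much larger convolutional architectures.
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for a real image loader (batches of 32 flattened 28x28 "images").
data_loader = [torch.randn(32, 784) for _ in range(10)]

for real_images in data_loader:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, 64))

    # Discriminator step: learn to label real images 1 and generated images 0.
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label its fakes as real.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```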
The 2017 “Attention Is All You Need” research paper by Vaswani et al. was a major breakthrough in the field of natural language processing (NLP). The paper introduced a new way to build neural networks called the Transformer. Compared to previous AI models, Transformers are better at understanding and generating long sequences of text or code. This is because they use a series of “attention” mechanisms to learn how different parts of a sequence relate to each other. This made it possible for the Transformer to achieve state-of-the-art results on a variety of NLP tasks, including machine translation, text summarization, and question answering.
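To make the idea of “attention” concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside the Transformer, written with plain PyTorch tensors. The single-head, projection-free setup and the shapes are simplifications for illustration only.

```python
import math
import torch

def scaled_dot_product_attention(query, key, value):
    # Compare every token's query against every key to get attention weights,
    # then use those weights to mix the value vectors. This is how the model
    # learns how different parts of a sequence relate to each other.
    scores = query @ key.transpose(-2, -1) / math.sqrt(query.size(-1))
    weights = torch.softmax(scores, dim=-1)
    return weights @ value

# Example: a batch of one sequence with 10 tokens, each a 64-dimensional vector.
x = torch.randn(1, 10, 64)
out = scaled_dot_product_attention(x, x, x)  # self-attention: q, k, v come from the same sequence
print(out.shape)  # torch.Size([1, 10, 64])
```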
The rise of the Transformer led to the development of large language models (LLMs): neural networks trained on massive datasets of text and code that can generate prose, translate languages, write many kinds of creative content, and answer questions. At their best, the text they produce can be difficult to distinguish from human-written text.
Latent diffusion models are another class of generative model that has gained popularity in recent years. They generate images by gradually refining random noise into a coherent final image, which makes them well suited to producing outputs that are realistic yet also creative and unique (a heavily simplified sketch of this denoising loop follows).
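The sketch below illustrates only the reverse-diffusion idea (start from pure noise and repeatedly subtract the noise a model predicts), using a stand-in denoiser and a crudely simplified schedule. Real latent diffusion models use a trained U-Net, a learned noise schedule, and operate in a compressed latent space.

```python
import torch
import torch.nn as nn

# Stand-in for a trained denoising network (real models use a U-Net over latents).
denoiser = nn.Sequential(nn.Linear(784, 784), nn.ReLU(), nn.Linear(784, 784))

num_steps = 50
with torch.no_grad():
    x = torch.randn(1, 784)  # start from pure Gaussian noise
    for step in range(num_steps):
        predicted_noise = denoiser(x)        # the model estimates the noise still present
        x = x - predicted_noise / num_steps  # remove a little of it (schedule heavily simplified)

print(x.shape)  # the generated sample after iterative denoising
```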
Key Generative AI Players and Investors
As we navigate the intricate tapestry of generative AI advancements, it’s clear that certain trailblazers are setting the pace, not merely keeping up. Steering the direction of our AI-empowered present, these are the key Generative AI players and investors propelling the space forward.
The major established companies involved in the generative AI space include:
- OpenAI is a “capped-profit” research company that developed ChatGPT, the DALL-E image generator, and the GPT-4 LLM (which reportedly cost over $100 million to train!)
- Google Brain is the deep learning research division at Google. They oversee the development of TensorFlow, their open-source AI model development library. They also developed LaMDA, a conversational LLM, and work to integrate it into Google and YouTube’s products
- Meta AI Research is Meta’s AI research lab, which develops AI capabilities for the company’s social and virtual reality products. They oversee the development of PyTorch, an open-source AI model development library. They also developed LLaMA, an open-source foundational language model
- Microsoft Research is the AI research wing of Microsoft; on the generative AI front, its focus has been partnering with OpenAI and integrating OpenAI’s models into Bing and Office products. Earlier this year, Microsoft invested $10B in OpenAI.
It should be noted that the major companies offer infrastructure services in the form of model APIs and open source libraries. Their projects have laid down much of the industry’s groundwork.
Meanwhile, the major burgeoning startups involved in the generative AI space include:
- Cohere is an AI infrastructure startup that provides access to advanced large language models and NLP tools ($2B valuation)
- Hugging Face is essentially the GitHub of machine learning. They host an open-source library for customers to build, train, and deploy AI models ($2B valuation)
- Scale provides a platform to label data for training machine learning models. They also provide other services for enterprises to use during the machine learning development pipeline ($7.3B valuation)
- Midjourney provides an independent generative text-to-image model that can generate high-quality images based on user prompts
- Stability AI hosts an open-source generative text-to-image model that can generate high-quality, artistic images from prompts ($1B valuation)
- Anthropic is an AI safety and research company that is developing general AI systems and language models ($4.1B valuation)
- Character AI provides a chatbot service that generates human-like, contextual conversations from the perspective of user-created characters ($1B valuation)
- Jasper AI is an AI marketing “co-pilot” that helps teams create content for copywriting, advertising, and branding ($1.5B valuation)
- Glean offers a generative-AI-powered enterprise search and knowledge discovery platform ($1B valuation)
- Adept is a private research lab that is building toward artificial general intelligence (AGI) ($1B valuation)
The major startups in generative AI, unlike the major companies, are a mix of infrastructure and application layer services. Cohere, Hugging Face, Scale, and Adept offer infrastructure services, whereas Midjourney, Stability, Anthropic, Character, Jasper, and Glean offer application layer services.
Source: Dealroom
Major investors in generative AI, according to CB Insights, include:
- Sequoia Capital has invested in OpenAI, Hugging Face, and Glean. They have also invested in smaller generative AI startups like Harvey and are invested in companies that are integrating generative AI into their products, such as Notion
- Tiger Global has notably invested in Cohere, Scale, and Pinecone (a vector database for AI applications)
- Khosla Ventures has invested in OpenAI. They have also invested in Regie.ai and Analog Inference
- Coatue Management has invested in Stability AI, Scale, and Runway AI
- a16z notably invested in Character.AI and Pinecone
- Addition has been a lead investor in Adept and Hugging Face, as well as Primer
- Spark Capital has been a lead investor for Adept and Anthropic
- Y Combinator is a startup accelerator that has attracted dozens of new generative AI companies, most notably Jasper and Baselit. In its most recent Summer 2023 cohort, over 35% of startups were focused on generative AI. YC was also an investor in Scale and OpenAI
The Dangers of the Hype Cycle
Amidst the rise of key players in the generative AI arena, and despite the rapid adoption of generative AI by ordinary people, it’s imperative to discern substance from spectacle. As this technology captivates the masses, often blurring the line between marvel and mirage, the challenge lies in maintaining a cautious optimism toward it. The Hype Cycle concept can help us navigate this nuanced landscape.
Source: Gartner
The Hype Cycle is a graphical representation of the maturity and adoption of new technologies, developed by the research firm Gartner. It is divided into five phases:
- Innovation Trigger: A new technology is first introduced and there is a lot of excitement and speculation about its potential
- Peak of Inflated Expectations: The hype around the technology reaches a peak and people start to believe that it can solve all of the world’s problems
- Trough of Disillusionment: People start to realize that the technology is not as perfect as they thought it was and there are a number of challenges that need to be solved before it can be widely adopted
- Slope of Enlightenment: The technology starts to mature and people start to see the real-world benefits that it can offer
- Plateau of Productivity: This is the phase where the technology is widely adopted and used to solve real-world problems
Right now, it seems that Generative AI may be slightly past the Peak of Inflated Expectations phase and heading toward the Trough of Disillusionment. There has been a lot of excitement and hype about the potential of generative AI, but this hype may have also caused inflated expectations and questionable valuations.
The AI hype seems to have exploded following the release of ChatGPT. Stories soon popped up all over social media about the amazing things people were able to do with it. As a result, a huge number of startups using generative AI attracted billions of dollars of VC funding. Some startups, such as Midjourney or Glean, solve real pain points and provide enduring value in critical business applications. Others appear to be thin wrappers over GPT-4 and may struggle to provide real value to users.
This can be exemplified by the case of Jasper AI. In 2022, Jasper launched an AI content creation tool to help businesses create content for social media, advertising, articles, and websites. The product was built on OpenAI’s GPT-3 model, and Jasper was an official partner of OpenAI. In October 2022, Jasper raised $125 million at a staggering $1.5 billion valuation. However, one month later, ChatGPT was released, and its explosive growth and improved text generation blew Jasper’s offerings out of the water. Since then, Jasper has appeared to struggle to pivot; in July 2023, it laid off an undisclosed number of employees. Jasper seems to have underestimated the abilities of general-purpose LLMs like ChatGPT, and its product quickly came to look redundant and lower-quality by comparison. Jasper’s reliance on OpenAI should have been a red flag to investors.
The initial hype around the technology seems to have instilled a notion among users, entrepreneurs, and VCs that generative AI’s use cases are broader than they actually are and that the responses generated by these models are always reliable and accurate. As the hype cycle moves closer to the Trough of Disillusionment, entrepreneurs, users, and investors are finding out this is not the case. The most valuable generative AI applications appear to solve domain-specific problems that boost human labor rather than replace it.
Proprietary LLMs, such as GPT-4, are often unsatisfactory for domain-specific problems: they are too generalist in both their training and their responses. Smaller open models, such as Meta’s LLaMA, can be more efficient for developers and much easier to distribute. Because the weights are openly available, the model can be optimized, fine-tuned, and integrated into an existing tech stack more readily than a proprietary API, and building on it is far cheaper than training a model from scratch (a minimal example of this kind of integration is sketched below).
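As a rough illustration, here is a minimal sketch of pulling an open-weight model into your own stack via the Hugging Face transformers library. The checkpoint name is illustrative (LLaMA-family weights require accepting Meta’s license), and in practice you would add quantization, batching, and safety filtering.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint name; swap in any open-weight model you have access to.
model_name = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Summarize the following contract clause in plain language:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```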
Smaller model architectures and larger, more focused datasets are necessary to solve domain-specific problems. Smaller AI models are agile, easier to train, and quicker to deploy. Meanwhile, larger, focused datasets used to train the model make it more accurate and more valuable to customers. This indicates that the most valuable opportunities for gen AI could be initially dominated by large, established enterprises that have collected large proprietary datasets to train on.
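To make the point about smaller models plus focused data concrete, below is a hedged sketch of adapting a small open model to a narrow domain with LoRA adapters, assuming the Hugging Face transformers, datasets, and peft libraries. The base model (distilgpt2) and the data file (domain_corpus.jsonl, standing in for a proprietary domain dataset) are placeholders for illustration only.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "distilgpt2"  # tiny base model, used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters so only a small fraction of the weights is trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Placeholder path standing in for a proprietary, domain-specific corpus
# (e.g. anonymized legal documents).
dataset = load_dataset("json", data_files="domain_corpus.jsonl")["train"]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the same tokens
    return out

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
)
trainer.train()
```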
For entrepreneurs, this means generative AI startups should focus on niche applications rather than offering a general-purpose tool. If generative AI is the core of their product, being a thin wrapper over GPT-4 and solving a problem halfway won’t cut it when users can just use ChatGPT. Entrepreneurs should think about how to build a focused product that is better than consumer LLMs and establish a defensible moat. If generative AI is merely a feature of their product, it shouldn’t be forced in just because it is the “shiny new thing.” Generative AI features should only be implemented when they enhance the product’s workflow and save customers time or money.
Generative AI’s Role in Brazil
As generative AI continues to grow and mature, it is poised to have a major impact on a wide range of industries. Brazil is well-positioned to play a significant role in this space, with a large population of tech-savvy people and a growing startup ecosystem.
Advantages and Disadvantages of Brazilian Generative AI Startups
The most prevalent advantages and opportunities that new generative AI startups in Brazil can benefit from are:
The Home-Field Advantage / First-Mover Advantage
Because generative AI’s impact is global, understanding the cultural and regulatory intricacies is paramount. Brazilian entrepreneurs possess an intrinsic knowledge of their “home turf.” This extends beyond just understanding cultural nuances; it encompasses the legal frameworks, regulatory hurdles, and unique business challenges that are inherent to Brazil.
Where an American AI company might grapple with understanding the needs of the Brazilian market or navigating its legal maze, a Brazilian startup can move with the ease of familiarity. Their innate understanding of local dynamics gives them an edge, enabling them to tailor their offerings in a way that resonates more deeply with local businesses. One good example of a startup taking advantage of this is our portfolio company Lexter.ai. Their team has unique insight into the Brazilian legal system and can cater to the local needs of Brazilian law firms in a way that US startups like Harvey, Casetext, or Ironclad cannot.
In addition to having a home-field advantage, Brazilian startups could also have a first-mover advantage. If they are the first to offer a generative AI product to Brazilian customers, those customers might be less likely to use a similar foreign product when it enters the Brazilian market.
Affordable Agility
Silicon Valley is often lauded as the hub of innovation, but this prestige comes with a hefty price tag. The average salary for a software engineer in the San Francisco Bay Area is $172,444 USD according to Glassdoor, while the average salary for a software engineer in São Paulo is only about $51,000 USD according to the same source, making cities like São Paulo far more economical for startups than US tech hubs.
In addition to lower labor costs compared to their counterparts in the US, operational costs are markedly reduced. This financial flexibility means Brazilian startups can channel their funds more into product development, customer acquisition, and iterative improvements. They’re not shackled by exorbitant overheads, allowing them to prioritize growth and invest strategically in areas that can yield better returns.
Application Layer Dominance
The generative AI space, particularly in the US, is heavily skewed towards the infrastructure layer—a domain that demands colossal investments, often running into billions of dollars. As Konstantine Buhler of Sequoia aptly noted, the infrastructure realm is indeed “fat.”
For Brazilian startups, competing here might be an uphill task, given the capital-intensive nature of this layer and the fact that AI infrastructure is largely geographically agnostic. However, this also presents a unique opportunity. By pivoting toward the application layer, Brazilian startups can craft solutions tailored to specific local challenges and contexts, just as Lexter and Cloud Humans are doing in the Brazilian legal and CX realms, respectively.
While infrastructure may be universally applicable given that there are almost no geographical barriers to this technology, applications can be region-specific and more defensible, allowing Brazilian enterprises to carve a niche for themselves, ensuring their offerings are both relevant and resilient.
Some significant disadvantages and risks that could threaten new generative AI startups in Brazil are:
Third-Party Infrastructure Risk
The infrastructure layer, the core upon which generative AI functions, sees big, well-funded companies holding a strong advantage. Anchoring on third-party infrastructure services can therefore be both a blessing and a curse. On one hand, it grants Brazilian startups access to cutting-edge tools and models pioneered by established firms; on the other, it introduces a potential vulnerability. Should these companies perceive a Brazilian startup as competition, or simply pivot their strategies, they have the power to pull the rug out by severing access to their APIs or introducing competing solutions. Moreover, infrastructure providers may be unable to meet all of the demand from application builders, as has reportedly been the case for Microsoft, which could lead them to prioritize higher-spending US customers over Brazilian ones. These risks could jeopardize the very foundation on which these startups operate.
Emphasis on decentralized, open-source models could be a remedy to this vulnerability. By prioritizing systems that aren’t dictated by a handful of powerful entities, Brazilian startups can mitigate this risk and ensure greater autonomy.
Smaller Funding Ecosystem / VC Insulation
Silicon Valley, with its thriving venture capital scene, often sets the global gold standard for startup funding. Brazil, in comparison, offers a more nascent VC landscape. The pot of funds is smaller, valuations tend to be more conservative, and deal closures can be challenging. Moreover, the geographical distance and perhaps the lure of more mature markets have resulted in US VCs traditionally overlooking Brazil in favor of regions like Asia and Europe.
For Brazilian startups, the path forward here is twofold: first, they must operate lean, concentrating on traction and early revenue generation to enhance their attractiveness for later, larger funding rounds. Second, by demonstrating resilience and agility, they can position themselves as robust investment opportunities for both local and international investors, gradually bridging the funding chasm.
Lower Geographic Barriers
The democratization of technology means borders are blurring. While this interconnectedness is a boon, it also spells potential threats for Brazilian entrepreneurs. American companies, fortified by extensive funding and resources, could, with time and intent, tailor their offerings for the Brazilian market, challenging local startups at their own game.
However, this global fluidity is a two-way street. If US enterprises can eye Brazil as a potential market, Brazilian startups too, can harness this lack of barriers to explore neighboring Latin American markets, or even make inroads into the US. While competition will be fierce, the right strategies, underpinned by a deep understanding of local needs and cultural nuances, can help Brazilian startups not just survive, but thrive.
Key Takeaways for VCs and Entrepreneurs
Localized Solutions are Gold
For Brazilian entrepreneurs, Brazil’s cultural, regulatory, and business nuances are intricacies that only local players can truly take advantage of. It’s not about selling a flashy generative AI product; it’s about crafting a solution that deeply resonates with local needs. It is essential to build products that understand, cater to, and evolve with Brazil’s unique requirements.
For Brazilian VCs, investments should lean towards startups that present a genuine understanding of the Brazilian landscape. Such startups could be more likely to achieve product-market fit, scale faster, and offer a higher return on investment.
Cross-Pollination and Focus are Beneficial
For Brazilian entrepreneurs, Brazil’s corporate behemoths (Petrobras, Bradesco, Itau, JBS), while rich in proprietary data, often lack the nimbleness or expertise to innovate in the AI domain. This presents a lucrative opportunity. Startups can forge symbiotic partnerships with these entities, offering AI expertise in exchange for invaluable data access. This not only accelerates the product development cycle but also offers a competitive edge. Moreover, startups should maintain a laser focus on their core competencies instead of trying to be a general-purpose solution. This ensures they deliver unmatched value in their chosen domains.
Brazilian VCs should seek startups that understand the power of collaboration and are keen to explore synergies with established players. Such startups not only reduce their risk profile through strategic partnerships but also position themselves for accelerated adoption and traction.
Prioritize Enduring Value and Defendable Moats
For Brazilian entrepreneurs, the allure of generative AI models, such as GPT-4, is undeniable. However, building products that merely wrap around such models without addressing tangible problems is a short-lived and undefendable strategy. It’s crucial to build solutions that address real-world challenges, ensuring the product’s relevance and longevity.
Furthermore, in a space as dynamic as generative AI, the only sustainable advantage is a defensible moat. This could be in the form of a strong brand reputation, unmatched security features, network effects, or proprietary intellectual property.
For Brazilian VCs, look beyond the flash and hype of AI-powered startups. Instead, focus on those committed to delivering enduring value. Additionally, prioritize startups that have, or are in the process of building, strong protective moats. Such firms not only promise better longevity but also ensure higher resistance against competitive forces, promising better long-term yields.
In Conclusion
Generative AI in Brazil is poised at an exciting crossroads. The dynamism and agility that Brazilian startups bring, coupled with a keen understanding of local nuances, place them in a unique spot. With the right pushes, there’s little doubt that this burgeoning innovation ecosystem will not only adopt generative AI but play a major role in its global trajectory.
About the Author
Alex Mcneilly is an MIT student studying computer science and economics. His academic focus is on AI, computer graphics, and human-computer interaction (HCI). During Summer 2023, he was an intern at Grão VC.