Crypto and AI Overview


Both crypto and AI have begun rapidly disrupting multiple industries worldwide in their own right—crypto primarily through the introduction of decentralized financial systems and AI through task and decision-making automation. Interestingly, it is the intersection of these two fields that holds tremendous untapped potential. The promise of decentralized AI systems, driven by blockchain technology, could unlock new levels of transparency, security, and accessibility, transforming how we build and use AI. However, this path has come with its own challenges. The intersection of these two technologies is riddled with technical and economic obstacles that have made achieving competitive and efficient decentralized compute difficult.

This report explores the current state of AI and crypto, highlighting the key developments, technologies, challenges, and potential breakthroughs that will define this hybrid space in 2024 and beyond.

What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) is a rapidly evolving field that aims to replicate human cognitive functions in machines, enabling them to learn from experience, adapt to new and changing situations, and execute tasks that would normally require human intelligence. AI systems leverage complex algorithms and advanced data analysis to automate tasks, and some actively improve their own performance over time. The latter capability is most evident in machine learning, a core subset of AI focused on developing algorithms that allow computers to recognize patterns and make decisions based on data.

AI models are trained to identify trends and predict outcomes via three components:

  • Training Data
  • Model Architecture
  • Model Parameters

The training data refers to the foundational information made available to the algorithm, providing the examples from which it learns. The model architecture defines the overall structure and layers of the model itself, while the parameters are the internal values (such as weights) that the algorithm adjusts during training to optimize accuracy over time.
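
To make these three components concrete, the short sketch below uses PyTorch purely as an illustration: the synthetic dataset, the tiny network, and the training settings are assumptions made for this example rather than anything described in this report.

```python
# A minimal sketch of how training data, model architecture, and parameters
# fit together in practice. Illustrative only.
import torch
from torch import nn, optim

# 1) Training data: toy examples the model learns from (inputs and labels).
X = torch.randn(256, 4)                       # 256 samples, 4 features each
y = (X.sum(dim=1, keepdim=True) > 0).float()  # synthetic binary labels

# 2) Model architecture: the structure and layers of the network.
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

# 3) Model parameters: the weights inside the layers, adjusted during training.
optimizer = optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # measure error on the training data
    loss.backward()               # compute gradients for every parameter
    optimizer.step()              # nudge parameters to reduce the error
```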

AI algorithms need very large, carefully processed datasets and well-specified training procedures in order to generalize from examples to specific use cases. After deployment, these models can be used in many types of practical applications. For example, within the cryptocurrency sector, AI has been leveraged for enhanced security, automated processes, and even improved user experiences by integrating data-driven insights and automated decision-making.

An illustrative example of how data is used in AI training. Source

Machine Learning

Machine Learning (ML) models are computational frameworks that execute tasks without being directly programmed to do so. ML models have streamlined operations through a systematic pipeline that encompasses data collection, training, and inference, allowing an ML model to not only perform a task but actively learn and improve at it.

Overall, there are three established methodologies for building ML models: supervised, unsupervised, and reinforcement learning - each of which comes with its own approach to data interpretation and learning strategy.

  1. Supervised Learning: This strategy has developers take the role of the “teacher,” providing examples and guiding an ML model during the learning process. For example, an ML model designed to identify dogs in images would receive a dataset of labeled dog pictures. The model’s objective would be to learn the distinguishing features of dogs in order to classify them, with the developers’ direct guidance taking the form of the deliberately chosen labeled dataset (a minimal sketch contrasting the first two approaches follows this list).
  2. Unsupervised Learning: Unlike supervised learning, unsupervised learning does not rely on labeled datasets. Models such as Large Language Models (LLMs) like GPT-4 and LLaMA learn to identify patterns and relationships within data autonomously (LLM pre-training is more precisely described as self-supervised, a closely related approach). This form of learning is pivotal for understanding complex datasets without predefined categories or labels.
  3. Reinforcement Learning: This strategy includes a trial-and-error learning process approach, where the model learns to make decisions based on the feedback it receives from its environment. Reinforcement learning is most effective in sequential decision-making tasks, such as robotic control or strategic games like chess, where the model improves its performance based on the direct outcomes of its actions.
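
For reference, the toy sketch below contrasts the first two methodologies using scikit-learn and synthetic data (both our own assumptions): the supervised model is given labels, while the clustering model must discover structure on its own.

```python
# A toy contrast between supervised and unsupervised learning. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels exist only in the supervised case

# Supervised: the "teacher" supplies labeled examples (X, y).
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: only X is available; the model must find structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```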

ML models are built and distributed both through open-source platforms (like Hugging Face’s model hub) and through proprietary APIs from companies like OpenAI. This opens ML models up to a wide variety of applications and use cases, from academic research to commercial product integrations. These downstream uses make up what is known as the application layer - the user-facing end of the AI technology stack.

The application layer incorporates AI models into tangible products and services. These applications, which can be business-to-business (B2B) or business-to-consumer (B2C), leverage the underlying ML models to provide highly innovative solutions.

Large Language Models (LLMs)

Large language models (LLMs) are among the fastest-growing areas of AI development in the world today. There is fierce competition between corporations, governments, and start-ups to gather the resources required to build complex and highly capable LLMs, including:

  • Compute
  • Energy
  • Data

Arguably, the most important of these three resources is compute. This is largely due to the reliance on highly advanced and specialized graphics processing units (GPUs), notably the A100, H100, and upcoming B100 models from NVIDIA, which are crucial for training LLMs. Following the launch of ChatGPT, there was a significant supply shortage of these chips, though the market is now beginning to stabilize thanks to improved algorithms, alternative chips, and ramped-up production.


Because of this heavy reliance on GPUs, there is an equally heavy energy demand, as GPUs require immense amounts of power to run. It has been estimated that by 2030, data centers running GPUs could account for upwards of 9% of all energy demand in the United States. Because of this, major companies like Microsoft and Amazon have been exploring alternatives, including nuclear power, to meet this demand and ensure there is adequate energy to continually expand LLM development.

Lastly, LLMs aren’t effective without significant quantities of data. As models like GPT-4 have shown, the quantity and quality of the training dataset are what set AI models apart. GPT-4 was reportedly trained on roughly 12 trillion tokens sourced from the internet, including sites like Wikipedia and Reddit, and future models like GPT-5 are expected to require as many as 100 trillion tokens, far surpassing the availability of new public data.


The scarcity of high-quality training data is creating significant challenges. Many platforms, like Reddit and Twitter, are closing off access to their data or charging high fees while legal battles over copyright violations have intensified. Publications and creators are now suing AI companies for using copyrighted material without permission, while others are forming licensing agreements to monetize their data.

The race for data is not just about training the next generation of AI models; it also has far-reaching consequences for the future of the internet and content ownership. In response to this problem, decentralized solutions like the Grass protocol are offering promising ways of ensuring fair access to data, helping create new opportunities for AI-driven applications.


Problems with Web2 AI

So far, generative AI systems have succeeded at producing outputs based on patterns learned during training, but the lack of transparency behind the training itself presents a significant challenge. This is what is often referred to as the “black box” issue - where the lack of transparency in the development process makes it difficult for both users and even developers to understand how (and why) the AI system is making certain decisions.

The black box issue creates immense complexities for carrying out audits or addressing legal and ethical concerns. In other words, if it is impossible to ascertain how and why outputs are generated, it becomes extremely difficult to understand what the AI system is doing in the first place. Misuse of AI tools, such as the generation of biased, incorrect, or copyrighted content, further exacerbates the problem.

Much of this is the result of a high degree of centralization within AI development, as a handful of large American tech companies are leading the charge. This concentration of development power raises data sovereignty and governance issues, with these major tech corporations holding some of the deepest data mines on Earth. This level of dominance stifles competition, with smaller companies facing significant barriers to entry - especially as the AI arms race creates scarcity in the limited resources of compute power and energy.


Decentralized AI, built on blockchain technology, offers a promising potential solution to the above. For one, it aims to democratize AI development by distributing ownership, governance, and oversight, making AI more transparent and accessible. Open development makes it much easier for researchers and academics to understand precisely how a system is developing and how capable it has become. It also lowers the barrier to entry for independent developers, creating more competition, increasing innovation, and driving more value creation. This could help balance the influence of major tech companies and promote a more inclusive AI future.

Crypto x AI

While it may not seem like an obvious synergy at first glance, the crossover between AI and cryptocurrency represents one of the most compelling trends in technology. A key facet of the convergence between these two fields is simply the magnitude of the open-source innovation that is happening, particularly evident in models on platforms like Hugging Face.

Hugging Face hosts over 530,000 different models for research and application purposes, making it one of the most comprehensive, collaborative AI libraries on earth. Similar to how GitHub and Discord have become indispensable tools within the crypto ecosystem for code hosting and community management, Hugging Face plays this role in AI collaboration.


What Hugging Face ultimately proves is that AI and open development can and do go hand in hand. As for blockchain and AI specifically, the benefits blockchain offers to AI innovation and development fall into two primary categories:

  1. AI-enhancing crypto applications
  2. Crypto-driven disruptions of traditional AI pipelines

The former refers to potential utilities like improving blockchain analytics and integrating AI model outputs into permissionless protocols. The latter relates more closely to the benefits of the blockchain ledger and decentralized governance in promoting more open-source development across the typical AI creation pipeline.

Overall, the existing qualities of each technology combine in a unique way that generates a number of different synergies, as shown below:


Centralized vs. Decentralized AI World

While current AI model development points to tech giants building centralized, general-purpose, large-scale AI models, there are several reasons to believe that the future will consist of many smaller, highly specialized models. This “many-models” world holds merit because purely centralized AI development is not necessarily the most economical path forward.

There are several reasons that small and medium-sized businesses (and potentially even large corporations) would opt for smaller, more specialized models over the centralized, large-scale models being developed by firms like Microsoft, Google, and OpenAI:

  1. Centralization/Platform Risk: The problem with relying on a single closed-source provider is that large, centralized AI companies can change terms, adjust models, or stop offering services altogether - without warning.
  2. Privacy: AI is deeply embedded in organizational workflows, and many companies are hesitant to entrust sensitive data to large, centralized models. Specialized models, especially those running locally or on privacy-focused platforms, allow companies to maintain control over their data.
  3. Cost-Performance Tradeoff: General-purpose AI models are expensive to train and operate. While they can handle a wide range of tasks, they are often overpowered and overpriced for many use cases. Research indicates that smaller, specialized models can outperform larger general-purpose models in specific domains. As AI reaches scale, businesses will increasingly seek models that offer optimal performance at lower costs for their specific needs.

These three factors point toward a more fragmented, decentralized market being preferred over a highly concentrated, centralized AI market. Should the many-models world come to pass, developers and businesses could utilize open-source models like LLaMA and MistralAI, fine-tuning them with their own proprietary data to create specialized applications.

In short, this points to the future of AI being highly modular, where developers and entrepreneurs compete to deliver value to users through a combination of varying models and services. This modularity will require new infrastructure for tasks like routing, orchestration, payments, and coordination. However, to understand how AI can achieve a high degree of decentralization, several key questions must be addressed:

  • Who can build these models? - The development of AI models traditionally requires significant resources and technical expertise, which can limit accessibility and control to a few well-equipped entities.
  • Who has the data? - Data is the cornerstone of AI. The ownership and control over data sources are crucial in determining who can train effective models.
  • Who has the resources? - Computational power and financial resources are necessary for training sophisticated AI models, often putting these capabilities beyond the reach of individuals or smaller organizations.
  • Who is creating products atop these models? - The applications and services developed using AI models can influence various sectors, but the creators of these products often dictate the technology's direction and implementation.

Current Status of Crypto x AI

In 2024, the crossover between cryptocurrency and AI can be broken down into three primary categories: infrastructure, resources, and problem-solving.

Source: Grayscale Investments. Protocols included are illustrative examples.

The infrastructure layer includes networks that provide platforms for AI development, such as NEAR, TAO, and Fetch.ai. These networks focus on building open, permissionless architectures to support various decentralized services, including AI, and offer incentives to developers through tokens (e.g., NEAR or TAO tokens).

Next, as AI development demands massive amounts of computing power, storage, and data, decentralized projects like Render (RNDR), Akash (AKT), and Livepeer (LPT) provide access to idle GPU resources for demanding use cases like AI. Storage solutions like Filecoin (FIL) and Arweave (AR) offer secure, decentralized alternatives for AI data storage, reducing reliance on centralized systems like AWS, where users must trust a single provider with their data. This is crypto’s own AI resource layer, with a number of solutions emerging for various pain points in AI development and deployment.


Lastly, there are numerous protocols attempting to solve AI-related problems, including deepfakes and bots. Worldcoin (WLD) offers biometric-based solutions to verify human identity, theoretically combating bots. OriginTrail (TRAC), Numbers Protocol (NUM), and Story Protocol focus on verifying content to help prevent deepfake-driven misinformation. Solutions to these issues are increasingly important as concerns grow over the potential misuse and even weaponization of AI, helping to maintain transparency and security around AI-generated content.

How Crypto Helps AI

Data Integrity

The integrity of the data AI systems are trained on is crucial for producing reliable outcomes. Blockchain’s tamper-resistant ledger ensures data authenticity and immutability, providing AI models with trustworthy information. Additionally, blockchain can verify data ownership through digital signatures and timestamped entries, ensuring that AI only accesses data from legitimate sources. This level of transparency is particularly valuable in fields like intellectual property and research, helping to prevent data theft, fraud, and ownership disputes.
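
As a rough illustration of these mechanics - not any specific protocol - the sketch below hashes a dataset, signs the digest, and bundles it with a timestamp: the kind of record that could be anchored on a ledger so that anyone can later check the data has not been altered. It assumes the third-party cryptography package, and the dataset and keys are placeholders.

```python
# Minimal sketch of anchoring a dataset's fingerprint: hash the data, sign the
# digest, and record (digest, signature, timestamp) somewhere tamper-resistant.
# The "publish to a ledger" step is represented by a plain dict here.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

dataset = b"example training records ..."            # stand-in for a real dataset
digest = hashlib.sha256(dataset).hexdigest()         # content fingerprint

signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(digest.encode())        # proves who vouched for the data

record = {                                           # what would be anchored on-chain
    "sha256": digest,
    "signature": signature.hex(),
    "timestamp": int(time.time()),
}
print(json.dumps({k: record[k] for k in ("sha256", "timestamp")}, indent=2))

# Later, a consumer recomputes the hash and verifies the signature.
public_key = signing_key.public_key()
assert hashlib.sha256(dataset).hexdigest() == record["sha256"]
public_key.verify(bytes.fromhex(record["signature"]), record["sha256"].encode())
print("dataset matches the anchored record")
```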

Decentralized Storage

Decentralized storage is a blockchain-based approach that involves storing data on a network of nodes instead of a centralized server. Decentralized storage offers several advantages over centralized storage. First, it provides greater data sovereignty, as users can maintain control over their own data. Second, it's more resilient to cyberattacks and outages, as the data is distributed across a network of nodes. Finally, it can be more cost-effective, as users only pay for the storage space they actually use.

Decentralized storage solutions offer worthy alternatives to traditional, centralized cloud services, particularly AWS and Google, and give AI startups and developers the opportunity to diversify away from these major tech companies - leading to potentially significant cost savings and improved data privacy and authenticity protections.

In relative terms, decentralized storage networks are generally cheaper than their centralized counterparts. Decentralized storage networks, such as Storj, range from 70% to 99% cheaper than Amazon S3. Sia and Filecoin achieve lower costs by leveraging an open marketplace model where the price of storage is a function of supply and demand cycles. Storj, however, uses fixed pricing for storage use on its network. Arweave, which guarantees perpetual storage, prices its storage with a ~200-year time horizon, making its upfront costs higher than those of its decentralized competitors.
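
The relative savings are straightforward to reason about. The snippet below is nothing more than arithmetic over placeholder per-GB prices (illustrative values, not quoted rates from any provider), showing how a percentage discount of the kind cited above translates into a monthly bill.

```python
# Illustrative monthly storage cost comparison. Prices are placeholders chosen
# only to show the arithmetic, not quoted rates from any provider.
def monthly_cost(gb_stored: float, price_per_gb_month: float) -> float:
    return gb_stored * price_per_gb_month

dataset_gb = 50_000                      # e.g., a 50 TB training corpus
centralized_price = 0.023                # placeholder $/GB-month
decentralized_price = 0.004              # placeholder $/GB-month

c = monthly_cost(dataset_gb, centralized_price)
d = monthly_cost(dataset_gb, decentralized_price)
print(f"centralized:   ${c:,.0f}/month")
print(f"decentralized: ${d:,.0f}/month")
print(f"savings:       {100 * (1 - d / c):.0f}%")
```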

Additionally, decentralized storage provides improved protection against data piracy, offering stronger privacy protections and ensuring that large centralized entities cannot misuse data. A secondary benefit is that data stored on decentralized networks is generally verifiable and authenticated, a direct result of the transparency of blockchain technology. Mechanisms like decentralized identifiers and blockchain-based data hashes offer potential solutions for affirming the authenticity and origin of digital content - something especially important as the lines between human and AI content continue to blur.

However, these innovations are not without their own challenges. Despite potential cost and privacy advantages, regulatory, technical, and operational factors complicate the transition from centralized cloud storage to decentralized platforms. It is largely still a young and immature subsector of blockchain, something that AI developers and emerging companies will have to weigh against the centralization risks when choosing how to store their sensitive data.

Decentralized Access to AI Compute, Machine Learning

The costs of training and developing AI models have skyrocketed, with only a few industry giants able to afford participation. OpenAI, for instance, is projected to spend over $3 billion on AI training costs in 2024, with estimates rising to $10 billion by 2025. This steep rise in costs has led to growing concerns about the increasing dominance of a few large players, effectively shutting out smaller competitors from accessing the resources necessary to innovate in the AI space.


In response to this issue, decentralized compute projects have emerged, offering promising solutions by leveraging unused global computing power. These platforms aim to create a more accessible and affordable alternative to centralized AI systems. Decentralized computing offers cost advantages, promotes resilience against censorship, and fosters equitable access to AI resources, ultimately encouraging greater innovation.

A significant challenge, however, lies in the bandwidth required for large-scale AI training. Traditional AI clusters, housed within centralized data centers and composed of uniform GPUs, rely on extremely fast communication speeds—up to 800 GB/s—necessary for efficient training. In comparison, decentralized consumer-level networks, such as those using residential internet bandwidth, cannot support the same high-level communication demands, making decentralized training infeasible for large models.
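
To put that gap in perspective, the back-of-envelope calculation below estimates how long a single full gradient exchange would take over a datacenter interconnect versus a residential uplink. The model size, gradient precision, and link speeds are assumed round numbers for illustration only.

```python
# Back-of-envelope: time to exchange one full set of gradients for a large model.
# All numbers are assumed round figures for illustration only.
params = 70e9                  # a 70B-parameter model
bytes_per_grad = 2             # fp16 gradients
payload_bytes = params * bytes_per_grad          # ~140 GB per sync step

datacenter_bw = 800e9          # ~800 GB/s GPU-to-GPU interconnect (as cited above)
residential_bw = 100e6 / 8     # 100 Mb/s uplink -> 12.5 MB/s

print(f"payload per sync: {payload_bytes / 1e9:.0f} GB")
print(f"datacenter link:  {payload_bytes / datacenter_bw:.2f} s")
print(f"residential link: {payload_bytes / residential_bw / 3600:.1f} hours")
```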

Yet, blockchain technology has proven that decentralized coordination of computational power is possible, as demonstrated by networks like Bitcoin, which surpass the computational power of even the largest AI training clusters. Despite this, the critical challenge remains: overcoming the bandwidth bottleneck required for real-time communication between GPUs, a key component of AI training.

Innovative approaches in decentralized compute are beginning to address these issues. Projects are exploring new machine learning training techniques that reduce the need for frequent communication between nodes, improving the efficiency of distributed models. These advancements, alongside fault-tolerant training methods and cryptographic incentives, are paving the way for decentralized AI training at scale.
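
One family of techniques alluded to above reduces how often workers need to synchronize, for example by letting each node take several local optimization steps before averaging parameters (often described as local SGD or periodic averaging). The sketch below simulates that idea with two workers on a toy least-squares problem; it is a conceptual illustration, not any particular project's training method.

```python
# Conceptual sketch of communication-reduced training ("local SGD"):
# each worker takes several local gradient steps, then parameters are averaged.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])

def make_shard(n):                               # each worker holds its own data shard
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

shards = [make_shard(500) for _ in range(2)]     # two workers
weights = [np.zeros(3) for _ in shards]          # each worker's local copy

def local_steps(w, X, y, steps=20, lr=0.05):
    for _ in range(steps):                       # many cheap local updates...
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

for _ in range(10):                              # ...and only 10 communication rounds
    weights = [local_steps(w, X, y) for w, (X, y) in zip(weights, shards)]
    avg = np.mean(weights, axis=0)               # the only cross-node communication
    weights = [avg.copy() for _ in weights]

print("recovered weights:", np.round(avg, 2))    # close to [2.0, -3.0, 0.5]
```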

Examples of platforms leading this effort include:

  • Bittensor: Bittensor’s "Proof of Intelligence" mechanism incentivizes collaborative AI model training and inference, rewarding participants who contribute to the development of the network, ultimately creating a decentralized AI platform.
  • Gensyn: Gensyn focuses on decentralized machine learning by verifying the accuracy of external computations, ensuring that AI models can be trained across a distributed network.
  • Akash Network: A decentralized marketplace for cloud computing resources. Using a proof-of-stake model, Akash connects users with compute providers through a reverse auction, offering competitive pricing and broad accessibility.

Technologies for Decentralized Compute

Zero-Knowledge Machine Learning (zkML)

An area of highly innovative development is that of zero-knowledge machine learning (zkML). zkML stems from the principles of zero-knowledge proofs, a cryptographic method that lets anyone prove the validity of a statement without revealing its underlying information. zkML applies the methodology of zk-proofs to verify the inputs and outputs of AI models without exposing the model’s proprietary aspects, like its weights or underlying algorithms.

This offers meaningful enhancements to ML systems, including boosting the transparency and accountability of AI models, addressing privacy concerns, and improving user trust. For example, zkML can hide a user’s query from the model owner, offering a privacy-preserving alternative to the current practice of most ML providers logging every query.
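
zkML builds on general-purpose zero-knowledge proof systems. As a flavor of the underlying primitive - not a proof about a neural network, which requires specialized proving systems such as zk-SNARK circuits over the model's computation - the sketch below implements a classic Schnorr proof of knowledge of a secret exponent, made non-interactive with the Fiat-Shamir heuristic. The parameters are deliberately tiny and insecure.

```python
# Schnorr zero-knowledge proof of knowledge of a discrete log, made
# non-interactive via Fiat-Shamir. zkML systems extend this kind of primitive
# to statements about entire model executions; this toy only proves knowledge
# of a secret exponent. Small demo parameters - not secure.
import hashlib, random

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Find a small Schnorr group: primes q and p = 2q + 1.
q = 100_003
while not (is_prime(q) and is_prime(2 * q + 1)):
    q += 1
p = 2 * q + 1
g = 4                                  # 4 = 2^2 generates the order-q subgroup of Z_p*

secret = random.randrange(1, q)        # the prover's secret (stand-in for model weights)
public = pow(g, secret, p)             # published value; does not reveal the secret

def prove(secret: int) -> tuple[int, int]:
    k = random.randrange(1, q)                         # fresh randomness
    commitment = pow(g, k, p)
    challenge = int.from_bytes(
        hashlib.sha256(f"{g}:{public}:{commitment}".encode()).digest(), "big") % q
    response = (k + challenge * secret) % q
    return commitment, response

def verify(commitment: int, response: int) -> bool:
    challenge = int.from_bytes(
        hashlib.sha256(f"{g}:{public}:{commitment}".encode()).digest(), "big") % q
    return pow(g, response, p) == (commitment * pow(public, challenge, p)) % p

print("proof accepted:", verify(*prove(secret)))       # True, without revealing `secret`
```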

Fully Homomorphic Encryption (FHE)

Fully Homomorphic Encryption (FHE) plays a critical role in safeguarding data privacy and security, especially within decentralized AI systems. FHE allows computations to be performed directly on encrypted data, ensuring the information stays secure even during processing. This is highly advantageous for cloud computing, where data vulnerability is a major concern. FHE prevents exposure of sensitive information, reducing the risks of data breaches and enhancing trust in AI systems, especially in regulated industries that require stringent data protection.
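
Full FHE schemes (such as CKKS or TFHE) are too involved to sketch here, but the core idea of computing on ciphertexts can be shown with the much simpler Paillier cryptosystem, which is only additively homomorphic. The toy below uses fixed demonstration primes and no hardening; it adds two encrypted values without ever decrypting the inputs.

```python
# Paillier encryption: additively homomorphic (a limited cousin of FHE).
# E(m1) * E(m2) mod n^2 decrypts to m1 + m2. Demo parameters only - not secure.
import math, random

p, q = 2**31 - 1, 2**61 - 1           # two (Mersenne) primes, for demonstration
n = p * q
n_sq = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                   # valid because g = n + 1 below

def encrypt(m: int) -> int:
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    # With g = n + 1, g^m mod n^2 = 1 + m*n
    return ((1 + m * n) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

a, b = 1234, 5678
ca, cb = encrypt(a), encrypt(b)
c_sum = (ca * cb) % n_sq               # homomorphic addition on ciphertexts
print(decrypt(c_sum))                  # -> 6912, computed without decrypting a or b
```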

How AI Helps Crypto

AI Agents

AI agents are advanced software systems capable of automating tasks and decision-making processes within blockchain ecosystems. They are powered by LLMs and reinforcement learning (RL) techniques, and they can interact autonomously with decentralized applications (dApps) by executing tasks without the need for human intervention.

Blockchain’s decentralized infrastructure provides the perfect environment for AI agents, where smart contracts enable the autonomous execution of transactions. Currently, most AI agents are relatively rudimentary, often programmed to handle very specific tasks like interacting with financial protocols. However, as technology advances, agents will become more capable of handling various functions, helping streamline user interactions with blockchain networks.

Use In DeFi

One of the most promising areas where AI is making an impact is DeFi. AI agents can manage DeFi protocols, taking on tasks such as liquidity provisioning, liquidations, and even portfolio rebalancing without manual input.

Some more specific examples include (a hedged sketch of intent matching follows the list):

  • User Intent Matching: AI agents help users translate their investment goals into actionable on-chain strategies, from simple transactions to complex portfolio management.
  • Action Planning and Routing: With blockchain infrastructure constantly evolving, AI agents can optimize transaction routing, factoring in variables like speed, security, and cost. For instance, AI agents can determine the most efficient Layer 2 solutions for transaction execution.
  • Shared Funds and Asset Pools: AI agents can help manage collective investments, such as community-governed funds, leveraging blockchain’s transparency and shared ownership models.
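
As one hedged example of intent matching, the sketch below routes a hypothetical swap request; every helper in it (parse_intent, quote_routes, the venue names, and the scoring rule) is an assumption standing in for real LLM and chain integrations.

```python
# A hedged sketch of an "intent matching" agent. Every function below
# (parse_intent, quote_routes) is hypothetical - a real implementation would
# call an LLM API and chain/bridge SDKs.
from dataclasses import dataclass

@dataclass
class Route:
    venue: str          # e.g. a DEX or Layer 2 (names here are placeholders)
    expected_out: float
    gas_cost_usd: float
    est_seconds: int

def parse_intent(text: str) -> dict:
    # Stand-in for an LLM call that turns natural language into a structured goal.
    return {"action": "swap", "sell": "ETH", "buy": "USDC", "amount": 1.0}

def quote_routes(intent: dict) -> list[Route]:
    # Stand-in for querying aggregators / Layer 2s for candidate executions.
    return [
        Route("venue_a", expected_out=3495.0, gas_cost_usd=6.0, est_seconds=12),
        Route("venue_b", expected_out=3502.0, gas_cost_usd=11.0, est_seconds=60),
    ]

def score(route: Route, prefer_speed: bool) -> float:
    net = route.expected_out - route.gas_cost_usd
    return net - (0.05 * route.est_seconds if prefer_speed else 0.0)

def run_agent(user_text: str, prefer_speed: bool = True) -> Route:
    intent = parse_intent(user_text)
    routes = quote_routes(intent)
    best = max(routes, key=lambda r: score(r, prefer_speed))
    # A real agent would submit the transaction here (omitted).
    return best

print(run_agent("swap 1 ETH to USDC, fast please"))
```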

Scaling Shared Ownership and Governance with AI

As AI agents take on more responsibilities in blockchain ecosystems, questions of shared ownership and governance inevitably follow. Blockchain provides a range of governance models, from minimal systems that resemble Bitcoin’s lean protocol to complex frameworks built around DAOs (Decentralized Autonomous Organizations).

In systems with shared ownership, AI agents can ensure the equitable distribution of value generated within a community, aligning the agents' goals with those of the broader network. Blockchain governance structures, particularly those based on DAOs, will play a crucial role in managing AI agents in a fair and transparent manner.

Blockchain Security

AI agents contribute to blockchain security by automating the detection and response to security threats. For instance, they can monitor transactions in real time, reducing the risk of human error, which is a common cause of many security breaches. However, the increasing use of AI agents also raises concerns about user control and privacy, particularly in terms of how much autonomy should be given to these agents.
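
As a narrow illustration of that monitoring idea, an agent might flag unusual transfers with a simple anomaly detector, as sketched below; the transaction features and data are synthetic, and a production system would stream real on-chain events.

```python
# Toy transaction monitor: flag transfers that look unusual relative to history.
# Features and data are synthetic; a real agent would stream on-chain events.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# columns: [amount, gas_price, seconds_since_last_tx]
history = np.column_stack([
    rng.lognormal(mean=1.0, sigma=0.5, size=1000),
    rng.normal(30, 5, size=1000),
    rng.exponential(600, size=1000),
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_txs = np.array([
    [2.5, 31.0, 500.0],      # looks ordinary
    [900.0, 250.0, 1.0],     # huge amount, extreme gas, rapid-fire: suspicious
])
flags = detector.predict(new_txs)        # -1 = anomaly, 1 = normal
for tx, flag in zip(new_txs, flags):
    print(tx, "ANOMALY" if flag == -1 else "ok")
```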

Potential Issues with Crypto x AI

Decentralization and Its Challenges

Due to intense computational demands, efficiently training LLMs requires specific, tightly connected hardware clusters. This presents logistical hurdles for decentralized compute, as clusters of this kind are impractical for most home setups. So, most users turn to cloud services, which brings back problems of data privacy and potential monopolization by centralized providers like AWS.

This is not to say that there isn’t potential within distributed computing for model training. One such example is the expansion of Differential Data Flow, which researchers are attempting to scale across decentralized networks. However, the current landscape still heavily favors localized hardware clusters, and even advanced retail GPUs cannot keep up. At the moment, despite advances in research, no Web3 project matches the efficiency of existing centralized methods. So, despite the hype around the many apparent intersections of AI and crypto, tangible real-world adoption remains quite limited.

Is It Too Early?

AI is often seen as a centralizing force, and while the idea of using crypto to counterbalance that with decentralization sounds appealing on paper, it remains largely theoretical. This integration is still very much in its infancy, mainly due to costs and technical challenges - especially with on-chain model training, which is notoriously difficult. Right now, most crypto projects with an AI application still operate almost exclusively off-chain, which misses the point of utilizing the blockchain in the first place. Technologies like zkML, Trusted Execution Environments (TEEs), and cryptoeconomic models show promise but have significant trade-offs that limit their immediate practical value.

Even so, there's a persistent vision of a decentralized, multi-model world where specialized AI models serve distinct, nuanced purposes. However, getting decentralized models off the ground is tough, especially when it comes to gathering proprietary datasets, which is an area dominated by centralized giants like OpenAI and Google. Networks like Helium demonstrate that it's possible to redistribute infrastructure costs in a decentralized way, but AI still faces huge barriers to competing with centralized, generalized models.

Right now, the crypto-AI landscape has a lot of limitations. Decentralized computing marketplaces sound like a great idea, but they struggle with latency and communication issues that make them impractical for large-scale AI training. Initiatives like Hivemapper and Grass AI show potential in data collection but face quality control and integration challenges. At the moment, crypto’s most effective role in AI is as a payment and value-transfer mechanism rather than in decentralized model training, which is still too expensive and inefficient compared to its benefits.

Timing is also a challenge. The Web3 space is often preoccupied with chasing short-term gains fueled by hype. Developers tend to follow investor trends, prioritizing quick profits over long-term innovation. This pattern is familiar in the crypto industry, where market narratives drive decisions, and participants get stuck chasing liquidity instead of focusing on substantial opportunities—especially in AI, which is evolving rapidly on its own.

So, while there is a growing belief that combining crypto and AI will lead to rapid advancements, it's important to stay skeptical. In the near term, crypto’s most realistic applications will probably be in specific areas, like enabling autonomous agents or prediction markets, rather than creating a fully decentralized AI ecosystem.

Is It Too Late?

The AI race is fundamentally about computing power. The more compute you have for both training and inference, the better the results. While optimizations and new architectures aim to address the limitations of traditional processing units, the core challenge remains the speed and scale at which matrix multiplications can be performed over vast amounts of data.

This explains the massive investments by "Hyperscalers" like OpenAI with Microsoft, Anthropic with AWS, and Google and Meta, which are increasingly building out their own data centers. These companies are creating synergies between AI models and the hardware that powers them, investing billions to prepare for AI’s growing impact on the global economy. NVIDIA’s CEO, Jensen Huang, estimates AI acceleration investments will reach $2 trillion in the coming years, driven largely by interest from sovereign entities. Analysts predict AI-related data center spending will hit $160 billion in 2024 and over $200 billion in 2025.

In contrast, Web3 and DePIN projects are far behind in their capacity to incentivize AI hardware investments. The current market capitalization of DePIN projects is around $40 billion, tied to illiquid and speculative tokens. Even if these tokens were to double in value over the next few years, reaching $80 billion, this pales in comparison to the cash spent by Hyperscalers. Web3's decentralized GPU networks and AI infrastructure projects would need billions more in investor demand to fund capital expenditures and operational costs, as well as to drive token value higher to incentivize further growth.

The reality is that much of the so-called "decentralized infrastructure" in Web3 is likely running on the very same Hyperscalers' cloud services it aims to compete against. Furthermore, the increasing demand for GPUs and specialized AI hardware will eventually increase supply, potentially lowering the cost of both cloud services and hardware acquisition. For now, however, the decentralized sector remains far from being able to challenge Hyperscalers' dominance in the AI compute space.

Conclusion

Considering everything, the potential crossover between AI and cryptocurrency stands out as one of the most exciting technological advancements of the 21st century and one of humanity’s best bets for avoiding a highly centralized AI future run by just a small handful of companies. This hybrid approach offers transformative possibilities by merging two complex and disruptive fields. As of 2024, while meaningful progress has been made, significant challenges remain. Decentralized AI has the power to revolutionize industries by providing more secure, transparent, and cost-effective alternatives to centralized systems. However, this vision faces considerable hurdles, such as the high costs of training AI models, the technical limitations of decentralized compute, and the overwhelming dominance of centralized giants in both AI and compute resources.

Yet, despite these challenges, the promise of decentralization continues to fuel innovation. Blockchain technology provides a unique avenue for democratizing AI development, expanding access to a broader range of developers and users. The future success of this movement will depend on decentralized platforms' ability to overcome current inefficiencies and offer viable, scalable alternatives to the hyperscalers that currently dominate the AI landscape.

Disclaimer: This research report is exactly that — a research report. It is not intended to serve as financial advice, nor should you blindly assume that any of the information is accurate without confirming through your own research. Bitcoin, cryptocurrencies, and other digital assets are incredibly risky and nothing in this report should be considered an endorsement to buy or sell any asset. Never invest more than you are willing to lose and understand the risk that you are taking. Do your own research. All information in this report is for educational purposes only and should not be the basis for any investment decisions that you make.

Both crypto and AI have begun rapidly disrupting multiple industries worldwide in their own right—crypto primarily through the introduction of decentralized financial systems and AI through task and decision-making automation. Interestingly, it is the intersection of these two fields that holds tremendous untapped potential. The promise of decentralized AI systems, driven by blockchain technology, could unlock new levels of transparency, security, and accessibility, transforming how we build and use AI. However, this path has come with its own challenges. The intersection of these two technologies is riddled with technical and economic obstacles that have made achieving competitive and efficient decentralized compute difficult.

This report explores the current state of AI and crypto, highlighting the key developments, technologies, challenges, and potential breakthroughs that will define this hybrid space in 2024 and beyond.

What is Artificial Intelligence (AI)?

The field of Artificial Intelligence (AI) is a rapidly evolving technology sector that aims to replicate human cognitive functions within machines, enabling them to learn from experience, adapt to new and changing situations, and execute tasks that normally would require human intelligence to complete accurately and efficiently. AI leverages complex algorithms and advanced data analysis to construct AI systems, allowing such systems to automate tasks, and some even actively improve their own performance. The latter is evident in an important core subset of AI called machine learning, which focuses on developing algorithms that allow computers to recognize patterns and make their own decisions based on data. 

AI models are trained to identify trends and predict outcomes via three components:

  • Training Data
  • Model Architecture
  • Model Parameters

The training data refers to any foundational information made available to the AI algorithm, providing it with examples from which the algorithm can learn. The model architecture defines the overall structure and layers of the model itself, while the parameters are fine-tuned bounds from which the algorithm can operate, optimizing accuracy over time.

AI algorithms need very large, processed datasets with specified training models in order to generate general solutions for specific use cases. After deployment, these models can be used for many types of practical applications. For example, within the cryptocurrency sector, AI has been leveraged for enhanced security, automated processes, and even improving user experiences by integrating data-driven insights and automated decision-making.

An illustrative example of how data is used in AI training. Source

Machine Learning

Machine Learning (ML) models are computational frameworks that execute tasks without being directly programmed to do so. ML models have streamlined operations through a systematic pipeline that encompasses data collection, training, and inference, allowing an ML model to not only perform a task but actively learn and improve at it.

Overall, there are three established methodologies for building ML models: supervised, unsupervised, and reinforcement learning - each of which comes with its own unique approach to data interpretation and educational strategy. 

  1. Supervised Learning: This strategy allows the developers to take the role of the “teacher,” in which they provide examples and guide an ML model during the learning process. For example, an ML model designed to identify dogs from images would receive a dataset labeled with pictures of dogs. The model’s objective would be to learn the distinguishing features of dogs to classify them, with direct guidance on the part of the developers being the deliberate choice of the labeled dataset.
  2. Unsupervised Learning: Unlike supervised learning, unsupervised learning does not rely on labeled datasets. Models, such as Large Language Models (LLMs) like GPT-4 and LLaMa, learn to identify patterns and relationships within the data autonomously. This form of learning is pivotal for understanding complex datasets without predefined categories or labels.
  3. Reinforcement Learning: This strategy includes a trial-and-error learning process approach, where the model learns to make decisions based on the feedback it receives from its environment. Reinforcement learning is most effective in sequential decision-making tasks, such as robotic control or strategic games like chess, where the model improves its performance based on the direct outcomes of its actions.

ML models are built and distributed through both open-source platforms (like Hugging Face’s model hub) and through proprietary APIs from companies like OpenAI. This opens up ML models to a wide variety of applications and use cases, including anything from academic research to commercial product integrations. This is what is known as the application layer - the user-facing end of the AI technology stack.

The application layer incorporates AI models into tangible products and services. These applications, which can be business-to-business (B2B) or business-to-consumer (B2C), leverage the underlying ML models to provide highly innovative solutions.

Large Language Models (LLMs)

Large language models (LLMs) are some of the fastest-growing AI builds in the world today. There is fierce competition between corporations, governments, and start-ups to gather the necessary resources required to build complex and highly intelligent LLMs, including:

  • Compute
  • Energy
  • Data

Arguably, of these three resources, the most important is compute. This is largely due to the reliance on highly advanced and specialized graphic processing units (GPUs), notably the A100, H100, and upcoming B100 models from NVIDIA, which are absolutely crucial for training LLMs. Following the launch of ChatGPT, there was actually a significant supply shortage, though demand is now beginning to stabilize thanks to improved algorithms, alternative chips, and ramped-up production.

Image1
Source

Now, because of the heavy reliance on GPUs, there is an equally heavy energy demand as GPUs require immense amounts of energy to run. It has been estimated that by 2030, data centers running GPUs could actually account for upwards of 9% of all the energy demand in the United States. Because of this, major companies like Microsoft and Amazon have been exploring alternatives, including nuclear power, to meet this demand and ensure there is adequate energy to continually expand LLM development.

Lastly, LLMs aren’t effective without significant quantities of data. As models like GPT-4 have shown, the quantity and quality of the training dataset are what set AI models apart. GPT-4 was trained on 12 trillion tokens sourced from the internet, including sites like Wikipedia and Reddit. Future models like GPT-5 are expected to require up to 100 trillion tokens, far surpassing the availability of new public data.

LLm size
Source

The scarcity of high-quality training data is creating significant challenges. Many platforms, like Reddit and Twitter, are closing off access to their data or charging high fees while legal battles over copyright violations have intensified. Publications and creators are now suing AI companies for using copyrighted material without permission, while others are forming licensing agreements to monetize their data.

The race for data is not just about training the next generation of AI models, it also has far-reaching consequences for the future of the internet and content ownership. Due to this problem, decentralized solutions like Grass protocol, are offering promising ways of ensuring fair access to data, helping creating new opportunities for AI-driven applications.

Source

Problems with Web2 AI

So far, generative AI systems have succeeded at producing outputs based on patterns learned during training, but the lack of transparency behind the training itself presents a significant challenge. This is what is often referred to as the “black box” issue - where the lack of transparency in the development process makes it difficult for both users and even developers to understand how (and why) the AI system is making certain decisions.

The black box issue creates immense complexities for carrying out audits or raising legal/ethical concerns. In other words, if it is impossible to ascertain how and why outputs are generated, it makes it extremely difficult to understand what the AI system is even doing in the first place. Misusing AI tools, like the generation of biased, incorrect, or copyrighted content, further exacerbates the problem.

Much of this is the result of a high degree of centralization within AI development, as a handful of large American tech companies are leading the charge. This concentration of development power does raise data sovereignty and governance issues, with these major tech corporations having some of the deepest data mines on Earth. This level of utter dominance massively stifles competition, with smaller companies facing potentially significant barriers to entry - especially as the AI arms race creates major scarcity for the limited resources of compute power and energy.

Source

Decentralized AI, utilizing blockchain technology, offers a promising potential solution to the above. For one, it aims to democratize AI development by distributing ownership, governance, and oversight, making AI more transparent and accessible. Open development makes it much easier for our researchers and academics to understand precisely how it's developing and to what extent its intelligence has grown. It also lowers the barrier to entry for independent developers to utilize, creating more competition, increasing innovation, and driving more value creation. This could help balance the influence of major tech companies and promote a more inclusive AI future.

Crypto x AI

While it may not seem like an obvious synergy at first glance, the crossover between AI and cryptocurrency represents one of the most compelling trends in technology. A key facet of the convergence between these two fields is simply the magnitude of the open-source innovation that is happening, particularly evident in models on platforms like Hugging Face.

Hugging Face hosts over 530,000 different models for research and application purposes, making it one of the most comprehensive, collaborative AI libraries on earth. Similar to how GitHub and Discord have become indispensable tools within the crypto ecosystem for code hosting and community management, Hugging Face plays this role in AI collaboration.

Source

What Hugging Face ultimately proves is that AI and open development can and do go hand in hand. As for blockchain and AI specifically, the benefits blockchain offers to AI innovation and development are formulated into two primary categories:

  1. AI-enhancing crypto applications
  2. Crypto-Driven disruptions of traditional AI pipelines

The former refers to potential utilities like improving blockchain analytics and integrating AI model outputs into permissionless protocols.  The latter relates more closely to the benefits of the blockchain ledger and decentralized governance in promoting better open-source development of the typical AI creation pipeline.

Overall, the existing qualities of each technology sector formulate together in a unique way that generates a number of different synergies, as shown below:

Source

Centralized vs. Decentralized AI World

While current AI model development points to tech giants developing centralized, general purpose large-scale AI models, there are several reasons to believe that the future will consist of many smaller, highly specialized models. The “many-models” world holds merit because purely centralized AI development is not necessarily the most economically friendly moving forward.

There are several reasons that medium to small businesses (and potentially even large corporations) would opt for smaller, more specialized models versus centralized large-scale models being developed by firms like Microsoft, Google, and OpenAI:

  1. Centralization/Platform Risk: The problem with a single closed-source provider from large, centralized AI companies is that they could change terms, adjust models, or stop offering services altogether - without warning. 
  2. Privacy: AI is deeply embedded in organizational workflows, and many companies are hesitant to entrust sensitive data to large, centralized models. Specialized models, especially those running locally or on privacy-focused platforms allow companies to maintain control over their data.
  3. Cost-Performance Tradeoff: General-purpose AI models are expensive to train and operate. While they can handle a wide range of tasks, they are often overpowered and overpriced for many use cases. Research indicates that smaller, specialized models can outperform larger general-purpose models in specific domains. As AI reaches scale, businesses will increasingly seek models that offer optimal performance at lower costs for their specific needs.

These three factors point toward a more fragmented, decentralized market being preferred over a highly concentrated, centralized AI market. Should the many-models world come to pass, developers and businesses could utilize open-source models like LLaMA and MistralAI, fine-tuning them with their own proprietary data to create specialized applications.

In short, this points to the future of AI being highly modular, where developers and entrepreneurs compete to deliver value to users through a combination of varying models and services. This modularity will require new infrastructure for tasks like routing, orchestration, payments, and coordination. However, to understand how AI can achieve a high degree of decentralization, several key questions must be addressed:

  • Who can build these models? - The development of AI models traditionally requires significant resources and technical expertise, which can limit accessibility and control to a few well-equipped entities.
  • Who has the data? - Data is the cornerstone of AI. The ownership and control over data sources are crucial in determining who can train effective models.
  • Who has the resources? - Computational power and financial resources are necessary for training sophisticated AI models, often putting these capabilities beyond the reach of individuals or smaller organizations.
  • Who is creating products atop these models? - The applications and services developed using AI models can influence various sectors, but the creators of these products often dictate the technology's direction and implementation.

Current Status of Crypto x AI

In 2024, the crossover between cryptocurrency and AI can be broken down into three primary categories: infrastructure, resources, and problem-solving.

Source: Grayscale Investments. Protocols included are illustrative examples.

The infrastructure layer includes networks that provide platforms for AI development, such as NEAR, TAO, and Fetch.ai. These networks focus on building open, permissionless architectures to support various decentralized services, including AI, and offer incentives to developers through tokens (e.g., NEAR or TAO tokens).

Next, as AI development demands massive amounts of computing power, storage, and data, decentralized projects like Render (RNDR), Akash (AKT), and Livepeer (LPT) provide access to idle GPU resources for demanding use cases like AI. Storage solutions like Filecoin (FIL) and Arweave (AR) offer secure, decentralized alternatives for AI data storage, reducing reliance on centralized systems like AWS, where data may be unsafe. This is crypto’s own AI application layer, with a number of solutions emerging for various pain points in AI development and deployment.

Source

Lastly, there are numerous protocols attempting to solve AI-related problems, including addressing deep fakes and bots. Worldcoin (WLD) offers biometric-based solutions to verify human identity, theoretically combating bots. Origin Trail (TRAC), Numbers Protocol (NUM), and Story Protocol focus on verifying content to help prevent deep fake-driven misinformation. Creating solutions to these issues is highly prevalent as there are increasing concerns over the potential misuse and even weaponization of AI, helping to maintain transparency and security over AI-generated content.

How Crypto Helps AI

Data Integrity

The integrity of the data AI systems are trained on is crucial for producing reliable outcomes. Blockchain’s tamper-resistant ledger ensures data authenticity and immutability, providing AI models with trustworthy information. Additionally, blockchain can verify data ownership through digital signatures and timestamped entries, ensuring that AI only accesses data from legitimate sources. This level of transparency is particularly valuable in fields like intellectual property and research, helping to prevent data theft, fraud, and ownership disputes.

Decentralized Storage

Decentralized storage is a blockchain-based approach that involves storing data on a network of nodes instead of a centralized server. Decentralized storage offers several advantages over centralized storage. First, it provides greater data sovereignty, as users can maintain control over their own data. Second, it's more resilient to cyberattacks and outages, as the data is distributed across a network of nodes. Finally, it can be more cost-effective, as users only pay for their storage space.

Decentralized storage solutions offer worthy alternatives to traditional, centralized cloud services, particularly AWS and Google, and give AI startups and developers the opportunity to diversify away from these major tech companies - leading to potentially significant cost savings and improved data privacy and authenticity protections.

In relative terms, decentralized storage networks are generally cheaper than their centralized counterparts. Decentralized storage networks, such as Storj, range from 70% to 99% cheaper than Amazon S3. Sia and Filecoin achieve lower costs by leveraging an open marketplace model where the price of storage is a function of supply and demand cycles. Storj, however, uses fixed pricing for storage use on its network. Arweave, which guarantees perpetual storage, prices its storage with a ~200-year time horizon, making its upfront costs higher than those of its decentralized competitors.

Additionally, decentralized storage does provide improved protection against data piracy, offering improved privacy protections and ensuring that large centralized entities cannot misuse data. Secondary to this is data stored on decentralized servers also generally benefits from being verifiable and authenticated, a direct benefit of the transparency of blockchain technology. Mechanisms like decentralized identifiers and blockchain-based data hashes offer potential solutions for affirming the authenticity and origin of digital content - something especially important as the lines between human and AI content get continually blurred.

However, these innovations are not without their own challenges. Despite potential cost and privacy advantages, regulatory, technical, and operational factors complicate the transition from centralized cloud storage to decentralized platforms. It is largely still a young and immature subsector of blockchain, something that AI developers and emerging companies will have to weigh against the centralization risks when choosing how to store their sensitive data.

Decentralized Access to AI Compute, Machine Learning

The costs of training and developing AI models have skyrocketed, with only a few industry giants able to afford participation. OpenAI, for instance, is projected to spend over $3 billion on AI training costs in 2024, with estimates rising to $10 billion by 2025. This steep rise in costs has led to growing concerns about the increasing dominance of a few large players, effectively shutting out smaller competitors from accessing the resources necessary to innovate in the AI space.

Source

In response to this issue, decentralized compute projects have emerged, offering promising solutions by leveraging unused global computing power. These platforms aim to create a more accessible and affordable alternative to centralized AI systems. Decentralized computing offers cost advantages, promotes resilience against censorship, and fosters equitable access to AI resources, ultimately encouraging greater innovation.

A significant challenge, however, lies in the bandwidth required for large-scale AI training. Traditional AI clusters, housed within centralized data centers and composed of uniform GPUs, rely on extremely fast communication speeds—up to 800 GB/s—necessary for efficient training. In comparison, decentralized consumer-level networks, such as those using residential internet bandwidth, cannot support the same high-level communication demands, making decentralized training infeasible for large models.

Yet, blockchain technology has proven that decentralized coordination of computational power is possible, as demonstrated by networks like Bitcoin, which surpass the computational power of even the largest AI training clusters. Despite this, the critical challenge remains: overcoming the bandwidth bottleneck required for real-time communication between GPUs, a key component of AI training.

Innovative approaches in decentralized compute are beginning to address these issues. Projects are exploring new machine learning training techniques that reduce the need for frequent communication between nodes, improving the efficiency of distributed models. These advancements, alongside fault-tolerant training methods and cryptographic incentives, are paving the way for decentralized AI training at scale.

Examples of platforms leading this effort include:

  • Bittensor: Bittensor’s "Proof of Intelligence" mechanism incentivizes collaborative AI model training and inference, rewarding participants who contribute to the development of the network, ultimately creating a decentralized AI platform.
  • Gensyn: Gensyn focuses on decentralized machine learning by verifying the accuracy of external computations, ensuring that AI models can be trained across a distributed network.
  • Akash Network: A decentralized marketplace for cloud computing resources. Using a proof-of-stake model, Akash connects users with compute providers through a reverse auction, offering competitive pricing and broad accessibility (a simplified reverse-auction sketch follows below).
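
The reverse-auction mechanism mentioned for Akash can be illustrated with a toy matcher: the requester posts a maximum price and minimum resource requirements, providers submit bids, and the cheapest qualifying bid wins. This is a generic sketch of the idea, not Akash's actual bidding engine:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price_per_hour: float   # in whatever unit of account the marketplace uses
    gpus: int

def match_order(max_price: float, min_gpus: int, bids: list[Bid]) -> Bid | None:
    """Pick the cheapest bid that satisfies the order's constraints."""
    eligible = [b for b in bids
                if b.price_per_hour <= max_price and b.gpus >= min_gpus]
    return min(eligible, key=lambda b: b.price_per_hour, default=None)

bids = [
    Bid("provider-a", 1.20, gpus=8),
    Bid("provider-b", 0.95, gpus=4),
    Bid("provider-c", 1.05, gpus=8),
]
print(match_order(max_price=1.50, min_gpus=8, bids=bids))  # provider-c wins at 1.05
```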

Technologies for Decentralized Compute

Zero-Knowledge Machine Learning (zkML)

An area of highly innovative development is that of zero-knowledge machine learning (zkML). zkML stems from the principles of zero-knowledge proofs, a cryptographic method that lets anyone prove the validity of a statement without revealing its underlying information. zkML applies the methodology of zk-proofs to verify the inputs and outputs of AI models without exposing the model’s proprietary aspects, like its weights or underlying algorithms.

This offers meaningful enhancements for ML systems, including boosting the transparency and accountability of AI models, addressing privacy concerns, and improving user trust. For example, zkML can hide a user’s query from the model owner, offering a privacy-preserving alternative to the current practice of most ML providers logging every query.
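
Schematically, a zkML deployment exposes a prover/verifier interface like the one below. The proof system itself is stubbed out: `generate_proof` and `verify_proof` are hypothetical placeholders for a real zero-knowledge backend and carry no cryptographic weight as written. The sketch only shows what stays public (a commitment to the weights, the input, the output, the proof) and what stays private (the weights themselves).

```python
import hashlib
import json

# Schematic only: generate_proof / verify_proof stand in for a real zero-
# knowledge proof system (e.g. a circuit compiled from the model) and are
# NOT cryptographically meaningful as written.

def commit_to_weights(weights: list[float]) -> str:
    """Public commitment to the private model weights."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def infer(weights: list[float], x: list[float]) -> float:
    """The private model: a toy linear model."""
    return sum(w * xi for w, xi in zip(weights, x))

def generate_proof(weights, x, y, commitment) -> dict:
    """Placeholder prover: a real one would emit a succinct proof that
    y = infer(weights, x) for weights matching `commitment`."""
    return {"claim": {"commitment": commitment, "input": x, "output": y}}

def verify_proof(proof, commitment, x, y) -> bool:
    """Placeholder verifier: only checks the claim's shape; a real verifier
    would check the proof cryptographically without ever seeing the weights."""
    return proof["claim"] == {"commitment": commitment, "input": x, "output": y}

# The model owner publishes a commitment but keeps the weights private.
weights = [0.4, -1.2, 0.7]
commitment = commit_to_weights(weights)

# The prover runs inference and produces the output plus a proof.
x = [1.0, 2.0, 3.0]
y = infer(weights, x)
proof = generate_proof(weights, x, y, commitment)

# Anyone can check the claimed result against the public commitment.
print(verify_proof(proof, commitment, x, y))   # True
```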

Fully Homomorphic Encryption (FHE)

Fully Homomorphic Encryption (FHE) plays a critical role in safeguarding data privacy and security, especially within decentralized AI systems. FHE allows computations to be performed directly on encrypted data, ensuring the information stays secure even during processing. This is highly advantageous for cloud computing, where data vulnerability is a major concern. FHE prevents exposure of sensitive information, reducing the risks of data breaches and enhancing trust in AI systems, especially in regulated industries that require stringent data protection.
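
Production FHE libraries are heavyweight, but the core idea of computing on ciphertexts can be shown with a much simpler, partially homomorphic scheme. The sketch below is textbook Paillier encryption, which supports only addition and uses toy key sizes that are nowhere near secure; it demonstrates that two values can be summed without ever being decrypted.

```python
import math
import random

# Textbook Paillier (additively homomorphic). Toy parameters: NOT secure.
p, q = 1789, 2003                  # demo primes; real keys use ~2048-bit primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m: int) -> int:
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the underlying plaintexts.
a, b = 1234, 5678
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))   # 6912, computed without ever decrypting a or b
```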

How AI Helps Crypto

AI Agents

AI agents are advanced software systems capable of automating tasks and decision-making processes within blockchain ecosystems. They are powered by LLMs and reinforcement learning (RL) techniques, and they can interact autonomously with decentralized applications (dApps) by executing tasks without the need for human intervention.

Blockchain’s decentralized infrastructure provides a natural environment for AI agents, where smart contracts enable the autonomous execution of transactions. Currently, most AI agents are relatively rudimentary, often programmed to handle very specific tasks like interacting with financial protocols. However, as the technology advances, agents will become capable of handling a broader range of functions, helping streamline user interactions with blockchain networks.
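
In practice, many of today's agents reduce to a loop: observe some on-chain state, ask a model for a decision, and submit a transaction only if the decision passes hard-coded safety checks. The sketch below shows that shape with every external call stubbed out; `read_pool_rate`, `ask_llm`, and `submit_tx` are hypothetical placeholders rather than a real protocol or model API.

```python
import random
import time

def read_pool_rate() -> float:
    """Placeholder for reading a lending-pool interest rate via an RPC node."""
    return random.uniform(0.01, 0.12)

def ask_llm(observation: str) -> str:
    """Placeholder for an LLM call; returns 'deposit', 'withdraw' or 'hold'."""
    rate = float(observation.split("=")[1])
    return "deposit" if rate > 0.08 else "hold"

def submit_tx(action: str) -> None:
    """Placeholder for signing and broadcasting a transaction."""
    print(f"submitting tx: {action}")

ALLOWED_ACTIONS = {"deposit", "withdraw", "hold"}   # hard safety rail

for _ in range(3):                  # a few iterations instead of an infinite loop
    rate = read_pool_rate()
    decision = ask_llm(f"rate={rate:.4f}")
    if decision in ALLOWED_ACTIONS and decision != "hold":
        submit_tx(decision)
    else:
        print(f"holding (rate={rate:.4f})")
    time.sleep(0.1)
```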

Use In DeFi

One of the most promising areas where AI is making an impact is DeFi. AI agents can manage DeFi protocols, taking on tasks such as liquidity provisioning, liquidations, and even portfolio rebalancing without manual input.

Some more specific examples include:

  • User Intent Matching: AI agents help users translate their investment goals into actionable on-chain strategies, from simple transactions to complex portfolio management.
  • Action Planning and Routing: With blockchain infrastructure constantly evolving, AI agents can optimize transaction routing, factoring in variables like speed, security, and cost. For instance, AI agents can determine the most efficient Layer 2 solutions for transaction execution (see the routing sketch after this list).
  • Shared Funds and Asset Pools: AI agents can help manage collective investments, such as community-governed funds, leveraging blockchain’s transparency and shared ownership models.
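
The routing idea from the list above can be made concrete with a toy scorer: given candidate networks with assumed fee, latency, and security figures, pick the route that best matches the user's weighting. The candidates and numbers are illustrative, not live network data.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    fee_usd: float        # expected transaction fee
    latency_s: float      # expected confirmation time
    security: float       # subjective 0-1 score

# Illustrative candidates; a real agent would pull live fee and latency estimates.
ROUTES = [
    Route("rollup-A", fee_usd=0.05, latency_s=2.0, security=0.8),
    Route("rollup-B", fee_usd=0.30, latency_s=1.0, security=0.9),
    Route("mainnet", fee_usd=4.00, latency_s=12.0, security=1.0),
]

def pick_route(w_fee=1.0, w_latency=0.2, w_security=2.0) -> Route:
    """Lower fees and latency are penalties; security is a reward."""
    def score(r: Route) -> float:
        return w_security * r.security - w_fee * r.fee_usd - w_latency * r.latency_s
    return max(ROUTES, key=score)

print(pick_route())                                          # a cheap rollup wins
print(pick_route(w_fee=0.01, w_latency=0.0, w_security=10))  # security-weighted: mainnet
```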

Scaling Shared Ownership and Governance with AI

As AI agents take on more responsibilities in blockchain ecosystems, issues of shared ownership and governance are more or less inevitable side effects. Blockchain provides a range of governance models, from minimal systems that resemble Bitcoin’s lean protocol to complex frameworks that utilize DAOs (Decentralized Autonomous Organizations).

In systems with shared ownership, AI agents can ensure the equitable distribution of value generated within a community, aligning the agents' goals with those of the broader network. Blockchain governance structures, particularly those based on DAOs, will play a crucial role in managing AI agents in a fair and transparent manner.

Blockchain Security

AI agents contribute to blockchain security by automating the detection and response to security threats. For instance, they can monitor transactions in real time, reducing the risk of human error, which is a common cause of many security breaches. However, the increasing use of AI agents also raises concerns about user control and privacy, particularly in terms of how much autonomy should be given to these agents.
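
As a deliberately simple example of automated monitoring, the sketch below flags transfers that deviate sharply from an address's recent history using a rolling z-score. Real systems use far richer features and models; the window and threshold here are arbitrary assumptions.

```python
from collections import deque
import statistics

class TransferMonitor:
    """Flag transfers far outside an address's recent behaviour."""

    def __init__(self, window: int = 50, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def check(self, amount: float) -> bool:
        """Return True if the transfer looks anomalous relative to recent history."""
        flagged = False
        if len(self.history) >= 10:                     # wait for a baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            flagged = abs(amount - mean) / stdev > self.z_threshold
        self.history.append(amount)
        return flagged

monitor = TransferMonitor()
for amt in [10, 12, 9, 11, 10, 13, 8, 10, 11, 12, 9, 10_000]:
    if monitor.check(amt):
        print(f"ALERT: unusual transfer of {amt}")
```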

Potential Issues with Crypto x AI

Decentralization and Its Challenges

Due to the strenuous computational demands, efficiently training LLMs requires tightly connected, purpose-built hardware clusters. This presents logistical hurdles for decentralized compute, as clusters of this kind are impractical for most home setups. So, most users turn to cloud services, which brings about problems of data privacy and potential monopolization by centralized providers like AWS.

This is not to say that there isn’t potential within distributed computing for model training. One example is work on Differential Data Flow, which researchers are attempting to scale across decentralized networks. However, the current landscape still heavily favors localized hardware clusters, and even advanced retail GPUs cannot keep up. At the moment, despite ongoing research, no Web3 project matches the efficiency of existing centralized methods. So, despite the hype around the many ways AI and crypto appear to intersect, tangible real-world adoption remains quite limited.

Is It Too Early?

AI is often seen as a centralizing force, and while the idea of using crypto to counterbalance that with decentralization sounds appealing on paper, it remains largely theoretical. This integration is still very much in its infancy, mainly due to costs and technical challenges - especially on-chain model training, which is notoriously difficult. Right now, most crypto projects with an AI application still operate almost exclusively off-chain, which misses the point of utilizing the blockchain in the first place. Technologies like zkML, Trusted Execution Environments (TEEs), and cryptoeconomic models show promise but come with significant trade-offs that limit their immediate practical value.

Even so, there's a persistent vision of a decentralized, multi-model world where specialized AI models serve distinct, nuanced purposes. However, getting decentralized models off the ground is tough, especially when it comes to gathering proprietary datasets, which is an area dominated by centralized giants like OpenAI and Google. Networks like Helium demonstrate that it's possible to redistribute infrastructure costs in a decentralized way, but AI still faces huge barriers to competing with centralized, generalized models.

Right now, the crypto-AI landscape has a lot of limitations. Decentralized computing marketplaces sound like a great idea, but they struggle with latency and communication issues that make them impractical for large-scale AI training. Initiatives like Hivemapper and Grass AI show potential in data collection but face quality control and integration challenges. At the moment, crypto’s most effective role in AI is as a payment and value-transfer mechanism rather than in decentralized model training, which is still too expensive and inefficient compared to its benefits.

Timing is also a challenge. The Web3 space is often preoccupied with chasing short-term gains fueled by hype. Developers tend to follow investor trends, prioritizing quick profits over long-term innovation. This pattern is familiar in the crypto industry, where market narratives drive decisions, and participants get stuck chasing liquidity instead of focusing on substantial opportunities—especially in AI, which is evolving rapidly on its own.

So, while there is a growing belief that combining crypto and AI will lead to rapid advancements, it's important to stay skeptical. In the near term, crypto’s most realistic applications will probably be in specific areas, like enabling autonomous agents or prediction markets, rather than creating a fully decentralized AI ecosystem.

Is It Too Late?

The AI race is fundamentally about computing power. The more compute you have for both training and inference, the better the results. While optimizations and new architectures aim to address the limitations of traditional processing units, the core challenge remains the speed and scale at which matrix multiplications can be performed over vast amounts of data.
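
A widely used rule of thumb from scaling-law research estimates training compute at roughly 6 FLOPs per parameter per training token. Plugging in assumed (illustrative) model, dataset, and hardware figures shows why raw compute is the binding constraint:

```python
# Rough training-compute estimate: FLOPs ≈ 6 * parameters * tokens.
params = 70e9            # 70B-parameter model (assumption)
tokens = 2e12            # 2T training tokens (assumption)
flops = 6 * params * tokens

gpu_flops = 500e12       # ~0.5 PFLOP/s effective per modern GPU (assumption)
gpus = 1024
seconds = flops / (gpu_flops * gpus)

print(f"total compute: {flops:.2e} FLOPs")
print(f"wall-clock on {gpus} GPUs: {seconds / 86400:.0f} days")
```

Under these assumptions, even a thousand high-end accelerators are occupied for weeks by a single large training run, which is the scale of spending the Hyperscalers are racing to secure.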

This explains the massive investments by "Hyperscalers" like OpenAI with Microsoft, Anthropic with AWS, and Google and Meta, which are increasingly building out their own data centers. These companies are creating synergies between AI models and the hardware that powers them, investing billions to prepare for AI’s growing impact on the global economy. NVIDIA’s CEO, Jensen Huang, estimates AI acceleration investments will reach $2 trillion in the coming years, driven largely by interest from sovereign entities. Analysts predict AI-related data center spending will hit $160 billion in 2024 and over $200 billion in 2025.

In contrast, Web3 and DePIN (Decentralized Physical Infrastructure Network) projects are far behind in their capacity to incentivize AI hardware investments. The current market capitalization of DePIN projects is around $40 billion, tied to largely illiquid and speculative tokens. Even if these tokens were to double in value over the next few years, reaching $80 billion, this pales in comparison to the cash spent by Hyperscalers. Web3's decentralized GPU networks and AI infrastructure projects would need billions more in investor demand to fund capital expenditures and operational costs, as well as to drive token value higher to incentivize further growth.

The reality is that much of the so-called "decentralized infrastructure" in Web3 is likely running on the very same Hyperscalers' cloud services it aims to compete against. Furthermore, the increasing demand for GPUs and specialized AI hardware will eventually increase supply, potentially lowering the cost of both cloud services and hardware acquisition. For now, however, the decentralized sector remains far from being able to challenge Hyperscalers' dominance in the AI compute space.

Conclusion

Considering everything, the potential crossover between AI and cryptocurrency stands out as one of the most exciting technological advancements of the 21st century and one of humanity’s best bets for avoiding a highly centralized AI future run by just a small handful of companies. This hybrid approach offers transformative possibilities by merging two complex and disruptive fields. As of 2024, while meaningful progress has been made, significant challenges remain. Decentralized AI has the power to revolutionize industries by providing more secure, transparent, and cost-effective alternatives to centralized systems. However, this vision faces considerable hurdles, such as the high costs of training AI models, the technical limitations of decentralized compute, and the overwhelming dominance of centralized giants in both AI and compute resources.

Yet, despite these challenges, the promise of decentralization continues to fuel innovation. Blockchain technology provides a unique avenue for democratizing AI development, expanding access to a broader range of developers and users. The future success of this movement will depend on decentralized platforms' ability to overcome current inefficiencies and offer viable, scalable alternatives to the hyperscalers that currently dominate the AI landscape.

Disclaimer: This research report is exactly that — a research report. It is not intended to serve as financial advice, nor should you blindly assume that any of the information is accurate without confirming through your own research. Bitcoin, cryptocurrencies, and other digital assets are incredibly risky and nothing in this report should be considered an endorsement to buy or sell any asset. Never invest more than you are willing to lose and understand the risk that you are taking. Do your own research. All information in this report is for educational purposes only and should not be the basis for any investment decisions that you make.


