Part 2: Why DAGs don't scale without centralisation

Disclaimer: This article was produced for Radix DLT. The full post can be found here.

Part 1 focused on the good aspects of DAGs and the improvements they have brought over blockchains. This concluding part looks at the issues DAGs will face in coming years as they try to scale. 

No global state

Blockchains operate through network participants having a view of the entire ledger at any one time. This allows all participants (or nodes) to check a transaction against ledger history and guard against the threat of double spending. It lies at the core of blockchain technology: all participants have open and equal access to all transactions.

DAGs, however, operate differently. Because transactions are added one by one rather than in blocks, there is no single fixed global state: the ‘global state’ of a DAG changes with every transaction.

This is not an issue if all nodes can see all transactions, because nodes can still check against historical transactions to ensure there is no double spend. This is how the IOTA Tangle currently operates, with the Tangle stored in full on every node. Because the database would grow too large if left unchecked and hard drive requirements would become infeasible, the database is pruned when necessary: a snapshot is taken, enabling nodes to delete all transactions prior to that point.
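The pruning described above can be sketched in a few lines. This is a toy model of a hypothetical node, not IOTA's actual implementation: the full history is folded into per-address balances at a cut-off point, after which the older transactions are deleted.

```python
# Minimal sketch of snapshot-based pruning (hypothetical node, not real IOTA code).
# The node keeps full transaction history until it grows too large, then collapses
# everything so far into a balance snapshot and deletes the old transactions.

class Node:
    def __init__(self):
        self.snapshot = {}        # address -> balance at the last snapshot
        self.transactions = []    # (sender, receiver, amount) since the snapshot

    def apply(self, sender, receiver, amount):
        self.transactions.append((sender, receiver, amount))

    def balance(self, address):
        # Current balance = snapshot balance + all transactions since.
        bal = self.snapshot.get(address, 0)
        for sender, receiver, amount in self.transactions:
            if sender == address:
                bal -= amount
            if receiver == address:
                bal += amount
        return bal

    def take_snapshot(self):
        # Fold history into per-address balances, then prune the history.
        addresses = {a for tx in self.transactions for a in tx[:2]} | set(self.snapshot)
        self.snapshot = {a: self.balance(a) for a in addresses}
        self.transactions = []    # everything before the snapshot is deleted

node = Node()
node.snapshot = {"alice": 10}
node.apply("alice", "bob", 4)
node.take_snapshot()              # the old transaction is gone...
print(node.balance("bob"))        # ...but the resulting balance survives: 4
```

Note what is lost: after `take_snapshot()` the node can no longer prove *how* "bob" came to hold 4, only that he does, which is exactly the trade against an immutable, ever-present ledger described above.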

This is not an optimal solution, as one of the benefits of blockchain is the immutable and ever-present ledger it keeps. To enable DAGs to avoid this necessary deletion, the DAG can be split into different shards. This works along similar lines to sharding a blockchain – the DAG is split up into lots of mini DAGs. It is less intensive for a node to process 1/100 of the DAG than to process it in its entirety (nodes only have to check against a much smaller subset of transactions), so more transactions can be processed in a smaller time frame. While all shards still operate under the same protocol, each now sees only part of the ongoing transactions and their associated history.

This causes a number of issues.

One downside of sharding a DAG is preventing double spends. A DAG can only guard against double spends if nodes have access to all transactions. To take a simple example, suppose the DAG is split into ten shards, and I present the same spend on the strongest tip of two of those ten shards. Unless some node has sight of both shards, the transaction will validate in each shard, resulting in a double spend.
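The ten-shard example can be made concrete with a toy illustration (this models no real DAG protocol): each shard validates spends only against the history it has itself seen, so the same output passes validation once per shard.

```python
# Toy illustration of the cross-shard double spend (not any real DAG protocol).
# Each shard only rejects double spends it has witnessed itself.

class Shard:
    def __init__(self):
        self.spent = set()   # outputs this shard has seen spent

    def validate(self, spend_output):
        # Reject only if *this shard* already saw the output spent.
        if spend_output in self.spent:
            return False
        self.spent.add(spend_output)
        return True

shards = [Shard() for _ in range(10)]

# Present the same spend of output "utxo-42" to two different shards.
print(shards[0].validate("utxo-42"))  # True  - shard 0 has never seen it
print(shards[7].validate("utxo-42"))  # True  - neither has shard 7: double spend
print(shards[0].validate("utxo-42"))  # False - only a repeat within one shard is caught
```

The fix, as noted below, is for nodes to consult other shards on every spend, but that communication costs as much as every node simply holding the entire DAG.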

As the DAG scales, this issue becomes more prevalent. The more shards there are, the less chance of overlap between shards, and thus the greater the possibility of double spending. The simple solution would be for all nodes to contact each other for every transaction they see – but that costs the same as every node simply holding the entire DAG.

Furthermore, unlike a blockchain such as Bitcoin, where miners continuously hash in parallel competition with one another, in a DAG hashing only happens when new transactions are processed. Malicious actors need only gain over 33% of total hash power to attack the network, even before it is sharded, and the lack of constant mining combined with the minimal level of transactions (IOTA, for example, currently processes between 1.2 and 2.4 tps, the vast majority of which are empty transactions) leaves it vulnerable to attack.

Secondly, there is no otherwise verifiable and guaranteed list of transactions in timestamp order. Unlike a blockchain, which has a block number and verifiable time of block creation, DAGs have no guaranteed and secure timestamps, as latency and transaction execution time vary across nodes. This causes issues not just for double spends but also for any application built on the DAG that requires an exact timestamp.
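The timestamp problem can be shown with a toy example (hypothetical latencies, no real protocol): two nodes receive the same two transactions with different network delays, so each records a different local arrival order, and there is no single agreed ordering to fall back on.

```python
# Toy illustration: the same two transactions arrive at two nodes with
# different latencies, so the nodes disagree on their order.

def arrival_order(latencies):
    # latencies: {tx_name: seconds until this node receives the transaction}
    return sorted(latencies, key=latencies.get)

# Node A sits close to the sender of tx1; node B sits close to the sender of tx2.
node_a = arrival_order({"tx1": 0.02, "tx2": 0.15})
node_b = arrival_order({"tx1": 0.30, "tx2": 0.05})

print(node_a)  # ['tx1', 'tx2']
print(node_b)  # ['tx2', 'tx1'] - no secure global timestamp ordering exists
```

A blockchain sidesteps this by stamping transactions with the block that includes them; a DAG has no equivalent anchor, which is exactly what applications needing an exact timestamp run into.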

As it is, decentralized security is traded for performance.

At present, the only way for a DAG to guarantee against double spending and 34% attacks is with the aid of a centralized authority. Byteball, another DAG, has 12 ‘Witness Nodes’ and IOTA has ‘The Coordinator’.

These tools mean that the networks are not censorship resistant and that, should the centralized authority be compromised, the network would be vulnerable to an attack from the centralized state itself.

These are meant to be temporary states for networks in their infancy, but so far, there is no proof they have the means to leave these centralized states behind.

Their presence calls into question the long-term viability of the system. A centralized authority directly contravenes the guiding principles of distributed ledger technology, and a project that relies on one at the start of its life builds in a capacity for centralization that could later be reactivated.

Consider what happens if a malicious actor manages to take a significant proportion of nodes out of action (either through an attack on the system or by an ancillary attack, e.g. on power grids). Does this mean that the centralized authority is reactivated? And what happens in the event of a DDoS attack on the centralized nodes themselves? A limited number of nodes is much easier to attack than thousands spread worldwide. One of the main selling points of Bitcoin was that it was a distributed network spread worldwide and as such would be much harder to ever shut down.

There are other issues associated with DAGs too that will hinder scaling to the levels needed.

Life in the real world

In test conditions, variables such as hardware and location are usually optimized or a non-issue (as it can be difficult or undesirable to spin up nodes worldwide). In real-world scenarios, no network can control these factors, so it must be prepared for the worst-equipped and worst-located nodes. In a network that provides instant (or near-instant, given the limits of the speed of light and the internet) confirmations, this causes a problem for distant or slower nodes, which will quickly fall out of sync with the network and instead begin to see unconfirmed transactions accumulate.

This then prevents new transactions from being resolved as quickly, and the system will start to fill up with more pending transactions. Owing to the architectural differences between blockchain and DAGs, how quickly your transactions are processed will then depend on which node you are connected to – unlike blockchain where the pending transactions/wait times are consistent, transparent and the same for all.

DAGs are capable of scaling beyond current blockchains. But just as blockchains will hit a limit on how much they can scale, so too will DAGs. The network will begin to struggle under its own weight without some form of centralized authority, or a revolutionary (and as yet completely unknown) new sharding technique which doesn’t compromise security, decentralization or performance. Much as blockchains will struggle to scale owing to fundamental design choices, so too will DAGs.
