Is there a possible solution to the block size/decentralization debate? Or will Bitcoin’s blockchain increase at 1MB every block forever?

I remember the block size wars of 2016 and 2017 that brought about BCH and BSV and others.

The clear winner was Bitcoin, the one that’s over $20k a coin. It didn’t win because it was objectively the best, but because the users decided it was. I never bought the other ones and have always stuck with Bitcoin “core.” However, I am still sometimes frustrated over transaction fees.

One of the biggest reasons I heard for not increasing the block size to >1MB is to maintain security and decentralization. If the blockchain grows too large too fast, then it would be harder to run a node/mine, and so the network would become more centralized to those with more resources.

So there’s a tradeoff. Bigger blocks mean more transaction throughput and lower fees, but less decentralization. The free market decided that 1MB is the sweet spot. Is there really no way to have both?

I’m no blockchain dev, definitely not smart enough. But what if we implemented something like what MINA does with ZKP or a ZK rollup or a “snapshot,” etc., of the earliest 200GB blocks once the blockchain reaches 1000GB? Gigabytes 1-200 are rolled up and a “snapshot” is made (surely those transactions are immutable by that point?), and the whole blockchain size is brought down to about 800GB. Once it reaches 1000GB again, then the earliest 200GB are rolled up again. This way the blockchain size never reaches more than 1TB and the block size could increase to around 4MB or something.

I’ve read The Bitcoin Standard; I understand transactions cannot be free, there has to be skin in the game. But if throughput could be increased 4x, fees lowered (never zero fees, though), and the blockchain kept under 1TB forever, wouldn’t that be amazing? Is it possible?
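To put rough numbers on the idea, here is a toy model in Python. This is purely my own back-of-the-envelope sketch, not anything Bitcoin actually does: the 4MB blocks, ~144 blocks/day, and the 1000GB/200GB thresholds are the assumptions from above, and the rolled-up "snapshot" is assumed to take negligible space.

    # Toy model of the rolling-snapshot idea above. Purely illustrative:
    # nothing like this exists in Bitcoin, and the block size, thresholds,
    # and "snapshot takes ~no space" assumption are all made up for the math.

    BLOCK_SIZE_GB = 4 / 1024    # hypothetical 4MB blocks
    BLOCKS_PER_DAY = 144        # ~one block every 10 minutes
    TRIGGER_GB = 1000           # roll up once the chain reaches this size
    ROLLUP_GB = 200             # oldest history folded into each snapshot

    chain_gb = 0.0
    snapshots = 0
    for day in range(1, 20 * 365 + 1):   # simulate ~20 years
        chain_gb += BLOCK_SIZE_GB * BLOCKS_PER_DAY
        if chain_gb >= TRIGGER_GB:
            chain_gb -= ROLLUP_GB        # old blocks replaced by a proof
            snapshots += 1
            print(f"day {day}: snapshot #{snapshots}, chain back to {chain_gb:.0f} GB")

With those numbers the chain grows about 0.56GB per day (~205GB a year), so the first rollup would hit after roughly five years and then repeat about once a year, keeping the on-disk size between roughly 800GB and 1000GB.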

If someone knows more about this, or if a dev reads this, please let me know why it would/wouldn’t work.

tl;dr Is it possible to increase the block size and use ZK technology to increase Bitcoin tx throughput while keeping the blockchain small?


6 thoughts on “Is there a possible solution to the block size/decentralization debate? Or will Bitcoin’s blockchain increase at 1MB every block forever?”

  1. As long as it’s working, it will probably not be discussed. Periods of a clogged mempool, like during the hype phases, are generally accepted, I think.

    If you keep the block size low, you will also make people use the blockchain in a more economical way. If you increase it now, people will just use it more wastefully. In the end, 1MB or 2MB blocks would probably not make any difference. Also, a lower size motivates Layer 2 development.

    The only way for this debate to become serious is if it’s proven over a long period of time that 1MB is not sufficient – and that hasn’t happened so far.

    No matter what you do, BTC needs to be verifiable by everyone. Under that condition it’s impossible to scale BTC to the entire world; higher layers are needed. GB blocks will never be a thing unless we revolutionize storage and internet speeds.
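    For scale, a quick back-of-the-envelope (my own numbers, assuming ~144 blocks/day and ignoring indexes and bandwidth overhead):

        # Rough storage cost of a full node per year at different block sizes.
        BLOCKS_PER_DAY = 144

        for block_mb in (1, 4, 32, 1024):        # 1024 MB = "GB blocks"
            gb_per_day = block_mb * BLOCKS_PER_DAY / 1024
            tb_per_year = gb_per_day * 365 / 1024
            print(f"{block_mb:>5} MB blocks: {gb_per_day:7.2f} GB/day, {tb_per_year:5.2f} TB/year")

    1MB blocks add about 50GB a year; GB blocks would add ~144GB a day (over 50TB a year) in raw block data alone, which is why only higher layers can realistically scale to everyone.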

  2. The users didn’t decide; the miners held the chain hostage. Users have moved on to other coins; prices just haven’t caught up yet.

  3. Ethereum is going down the road of State Expiry and Statelessness, but you’re not going to be seeing that on Bitcoin any time soon.

  4. No way it will stay the same forever. Storage is only going to get cheaper and more accessible, so the “decentralization” argument for small blocks doesn’t hold much water there.

    13 years ago, 1TB of storage could be considered a lot; today it’s peanuts.

    Imo, if there’s a need to increase the block size (too many transactions), then it should just be increased. At the moment 1MB still seems to be sufficient, but there’s no reason to stay at 1MB if it ever stops being enough.

  5. >But what if we implemented something like what MINA does with ZKP or a ZK rollup or a “snapshot,” etc., of the earliest 200GB blocks once the blockchain reaches 1000GB? Gigabytes 1-200 are rolled up and a “snapshot” is made (surely those transactions are immutable by that point?), and the whole blockchain size is brought down to about 800GB. Once it reaches 1000GB again, then the earliest 200GB are rolled up again. This way the blockchain size never reaches more than 1TB and the block size could increase to around 4MB or something.

    Pruning already exists in Bitcoin clients; this is not a new concept. But we still need full “archival” nodes to store the entire history.
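    (For reference, a pruned node is just a one-line setting in Bitcoin Core’s bitcoin.conf – it still downloads and verifies every block on initial sync, it just discards the old raw blocks afterwards and can’t serve them to anyone else:)

        # bitcoin.conf
        # Prune block files down to roughly the newest 550 MiB (the minimum
        # target Core accepts); the node still verifies the full history on
        # initial sync, it just doesn't keep the old raw blocks around.
        prune=550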

    If we let the whole history grow to, let’s say, 1000 TB, then who’s going to be able to run these nodes? A very select few institutions, which could easily collude to change history and steal funds.

    Slight increases in block size might be OK, but right now it would probably just decrease the usage of LN, which is not what we want.

    If you want lower fees, just move to LN; and if L1 transactions get so expensive that LN becomes less useful, then we can do a block size increase once it’s actually required.

    TL;DR: storing the whole history of the chain is important, and it becomes harder the bigger the blocks are.
