Bitcoin Savings Plan 2020: Use the Cost-Average Effect

BlockTorrent: The famous algorithm which BitTorrent uses for SHARING BIG FILES. Which you probably thought Bitcoin *also* uses for SHARING NEW BLOCKS (which are also getting kinda BIG). But Bitcoin *doesn't* torrent *new* blocks (while relaying). It only torrents *old* blocks (while sync-ing). Why?

This post is being provided to further disseminate an existing proposal, originally presented by jtoomim back in September of 2015 on the bitcoin-dev mailing list (full text at the end of this OP), and on reddit:
https://np.reddit.com/r/btc/comments/3zo72i/fyi_ujtoomim_is_working_on_a_scaling_proposal/cyomgj3
Here's a TL;DR, in his words:
BlockTorrenting
For initial block sync, [Bitcoin] sort of works [like BitTorrent] already.
You download a different block from each peer. That's fine.
However, a mechanism does not currently exist for downloading a portion of each [new] block from a different peer.
That's what I want to add.
~ jtoomim
The more detailed version of this "BlockTorrenting" proposal (as presented by jtoomim on the bitcoin-dev mailing list) is linked and copied / reformatted at the end of this OP.
Meanwhile here are some observations from me as a concerned member of the Bitcoin-using public.
Questions:
Whoa??
WTF???
Bitcoin doesn't do this kind of "blocktorrenting" already??
But.. But... I thought Bitcoin was "p2p" and "based on BitTorrent"...
... because (as we all know) Bitcoin has to download giant files.
Oh...
Bitcoin only "torrents" when sharing one certain kind of really big file: the existing blockchain, when a node is being initialized.
But Bitcoin doesn't "torrent" when sharing another certain kind of moderately big file (a file whose size, by the way, has been notoriously and steadily growing over the years - to the point where the system running the legacy "Core"/Blockstream Bitcoin implementation is starting to become dangerously congested, no matter what some delusional clowns among the "Core" devs may say): ie, the world's most wildly popular, industrial-strength "p2p file sharing algorithm" is mysteriously not being used where the Bitcoin network needs it the most in order to get transactions confirmed on-chain - when a newly found block needs to be shared among nodes, ie when a node is relaying new blocks.
https://np.reddit.com/r/Bitcoin+bitcoinxt+bitcoin_uncensored+btc+bitcoin_classic/search?q=blocktorrent&restrict_sr=on
How many of you (honestly) just simply assumed that this algorithm was already being used in Bitcoin - since we've all been told that "Bitcoin is p2p, like BitTorrent"?
As it turns out - the only part of Bitcoin which has been p2p up until now is the "sync-ing a new full-node" part.
The "running an existing full-node" part of Bitcoin has never been implemented as truly "p2p2" yet!!!1!!!
And this is precisely the part of the system that we've been wasting all of our time (and destroying the community) fighting over for the past few months - because the so-called "experts" from the legacy "Core"/Blockstream Bitcoin implementation ignored this proposal!
Why?
Why have all the so-called "experts" at "Core"/Blockstream ignored this obvious well-known effective & popular & tested & successful algorithm for doing "blocktorrenting" to torrent each new block being relayed?
Why have the "Core"/Blockstream devs failed to p2p-ize the most central, fundamental networking aspect of Bitcoin - the part where blocks get propagated, the part we've been fighting about for the past few years?
This algorithm for "torrenting" a big file in parallel from peers is the very definition of "p2p".
It "surgically" attacks the whole problem of sharing big files in the most elegant and efficient way possible: right at the lowest level of the bottleneck itself, cleverly chunking a file and uploading it in parallel to multiple peers.
Everyone knows torrenting works. Why isn't Bitcoin using it for its new blocks?
As millions of torrenters already know (but evidently all the so-called "experts" at Core/Blockstream seem to have conveniently forgotten), "torrenting" a file (breaking a file into chunks and then offering a different chunk to each peer to "get it out to everyone fast" - before your particular node even has the entire file) is such a well-known / feasible / obvious / accepted / battle-tested / highly efficient algorithm for "parallelizing" (and thereby significantly accelerating) the sharing of big files among peers, that many people simply assumed that Bitcoin had already been doing this kind of "torrenting of new-blocks" these past 7 years.
But Bitcoin doesn't do this - yet!
None of the Core/Blockstream devs (and the Chinese miners who follow them) have prioritized p2p-izing the most central and most vital and most resource-consuming function of the Bitcoin network - the propagation of new blocks!
Maybe it took someone who's both a miner and a dev to "scratch" this particular "itch": Jonathan Toomim (jtoomim).
  • A miner + dev who gets very little attention / respect from the Core/Blockstream devs (and from the Chinese miners who follow them) - perhaps because they feel threatened by a competing implementation?
  • A miner + dev who may have come up with the simplest and safest and most effective algorithmic (ie, software-based, not hardware-consuming) scaling proposal of anyone!
  • A dev who is not paid by Blockstream, and who is therefore free from the secret, undisclosed corporate restraints / confidentiality agreements imposed by the shadowy fiat venture-capitalists and legacy power elite who appear to be attempting to cripple our code and muzzle our devs.
  • A miner who has the dignity not to let himself be forced into signing a loyalty oath to any corporate overlords after being locked in a room until 3 AM.
Precisely because jtoomim is both an independent miner and an independent dev...
  • He knows what needs to be done.
  • He knows how to do it.
  • He is free to go ahead and do it - in a permissionless, decentralized fashion.
Possible bonus: The "blocktorrent" algorithm would help the most in the upload direction - which is precisely where Bitcoin scaling needs the most help!
Consider the "upload" direction for a relatively slow full-node - such as Luke-Jr, who reports that his internet is so slow, he has not been able to run a full-node since mid-2015.
The upload direction is the direction which everyone says has been the biggest problem with Bitcoin - because, in order for a full-node to be "useful" to the network:
  • it has to be able to upload a new block to (at least) 8 peers,
  • which places (at least) 8x more "demand" on the full-node's upload bandwidth.
The brilliant, simple proposed "blocktorrent" algorithm from jtoomim (already proven to work with Bram Cohen's BitTorrent protocol, and also already proven to work for initial sync-ing of Bitcoin full-nodes - but still un-implemented for ongoing relaying among full-nodes) looks like it would provide a significant performance improvement precisely at this tightest "bottleneck" in the system, the crucial central nexus where most of the "traffic" (and the congestion) is happening: the relaying of new blocks from "slower" full-nodes.
The detailed explanation for how this helps "slower" nodes when uploading, is as follows.
Say you are a "slower" node.
You need to send a new block out to (at least) 8 peers - but your "upload" bandwidth is really slow.
If you were to split the file into (at least) 8 "chunks", and then upload a different one of these "chunks" to each of your (at least) 8 peers - then (if you were using "blocktorrenting") it would only take you 1/8 (or less) of the "normal" time to do this (compared to the naïve legacy "Core" algorithm).
Now the new block which your "slower" node was attempting to upload is already "out there" - in 1/8 (or less) of the "normal" time compared to the naïve legacy "Core" algorithm.[ 1 ]
... [ 1 ] There will of course also be a tiny amount of extra overhead involved, due to the "housekeeping" performed by the "blocktorrent" algorithm itself - some additional processing and communicating to decompose the block into chunks, to organize the relaying of different chunks to different peers, and to recompose the chunks into a block again (all of which, depending on the size of the block and the latency of your node's connections to its peers, would in most cases be negligible compared to the much-greater speed-up provided by the "blocktorrent" algorithm itself).
Now that your block is "out there" at those 8 (or more) peer nodes to whom you just blocktorrented it in 1/8 (or less) of the time - it has now been liberated from the "bottleneck" of your "slower" node.
In fact, its further propagation across the net may now be able to leverage much faster upload speeds from some other node(s) which have "blocktorrent"-downloaded it in pieces from you (and other peers) - and which might be faster relaying it along, than your "slower" node.
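To make the arithmetic above concrete, here's a minimal back-of-the-envelope sketch in Python. The block size, peer count and upload bandwidth are hypothetical numbers chosen only to illustrate the claim:

```python
# Naive relay vs. torrent-style relay for a "slower" node.
# All parameters are hypothetical, for illustration only.

BLOCK_MB = 8        # size of the new block, in megabytes
PEERS = 8           # minimum number of peers a "useful" node relays to
UPLOAD_MBPS = 2.0   # the node's upload bandwidth, in megabits/second

block_megabits = BLOCK_MB * 8

# Naive ("Core") relay: upload the FULL block to each of the 8 peers.
naive_s = PEERS * block_megabits / UPLOAD_MBPS

# Torrent-style relay: upload a DIFFERENT 1/8 chunk to each peer, and let
# the peers trade the remaining chunks among themselves.
torrent_s = block_megabits / UPLOAD_MBPS

print(f"naive relay:   {naive_s:.0f} s of upload before the block is 'out there'")
print(f"torrent relay: {torrent_s:.0f} s, i.e. 1/{PEERS} of the naive time")
# -> 256 s vs 32 s with these numbers (plus a little chunking overhead).
```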
For some mysterious reason, the legacy Bitcoin implementation from "Core"/Blockstream has not been doing this kind of "blocktorrenting" for new blocks.
It's only been doing this torrenting for old blocks. The blocks that have already been confirmed.
Which is fine.
But we also obviously need this sort of "torrenting" to be done for each new block as it is being confirmed.
And this is where the entire friggin' "scaling bottleneck" is occurring, which we just wasted the past few years "debating" about.
Just sit down and think about this for a minute.
We've had all these so-called "experts" (Core/Blockstream devs and other small-block proponents) telling us for years that guys like Hearn and Gavin and repos like Classic and XT and BU were "wrong" or at least "unserious" because they "merely" proposed "brute-force" scaling: ie, scaling which would simply place more demands on finite resources (specifically: on the upload bandwidth from full-nodes - who need to relay to at least 8 peer full-nodes in order to be considered "useful" to the network).
These "experts" have been beating us over the head this whole time, telling us that we have to figure out some (really complicated, unproven, inefficient and centralized) clever scaling algorithms to squeeze more efficiency out of existing infrastructure.
And here is the most well-known / feasible / obvious / accepted / battle-tested algorithm for "parallelizing" (and thereby massively accelerating) the sharing of big files among peers - the BitTorrent algorithm itself, the gold standard of p2p relaying par excellence, which has been a major success on the Internet for well over a decade, at one point accounting for nearly 1/3 of all traffic on the Internet itself - and which is also already being used in one part of Bitcoin: during the phase of sync-ing a new node.
And apparently pretty much only jtoomim has been talking about using it for the actual relaying of new blocks - while Core/Blockstream devs have so far basically ignored this simple and safe and efficient proposal.
And then the small-block sycophants (reddit users and wannabe C/C++ programmers who have been beaten into submission by the FUD and "technological pessimism" of the Core/Blockstream devs, and by the censorship on their legacy forum), they all "laugh" at Classic and proclaim "Bitcoin doesn't need another dev team - all the 'experts' are at Core / Blockstream"...
...when in fact it actually looks like jtoomim (an independent miner + dev, free from the propaganda and secret details of the corporate agenda of Core/Blockstream - who works on the Classic Bitcoin implementation) may have proposed the simplest and safest and most effective scaling algorithm in this whole debate.
By the way, his proposal estimates that we could get about one order of magnitude greater throughput, based on typical latency and bandwidth figures for blocks of around 8 MB over connections of around 20 Mbps (which seems like a pretty normal scenario).
So why the fuck isn't this being done yet?
This is such a well-known / feasible / obvious / accepted / battle-tested algorithm for "parallelizing" (and thereby significantly accelerating) the sharing of big files among peers:
  • It's already being used for the (currently) 65 gigabytes of "blocks in the existing blockchain" itself - the phase where a new node has to sync with the blockchain.
  • It's already being used in BitTorrent - although the BitTorrent protocol has been optimized more to maximize throughput, whereas it would probably be a good idea to optimize the BlockTorrent protocol to minimize latency (since avoiding orphans is the big issue here) - which I'm fairly sure should be quite doable.
This algorithm is so trivial / obvious / straightforward / feasible / well-known / proven that I (and probably many others) simply assumed that Bitcoin had been doing this all along!
But it has never been implemented.
There is however finally a post about it today on the score-hidden forum /r/Bitcoin, from eragmus:
[bitcoin-dev] BlockTorrent: Torrent-style new-block propagation on Merkle trees
https://np.reddit.com/r/Bitcoin/comments/484nbx/bitcoindev_blocktorrent_torrentstyle_newblock/
And, predictably, the top-voted comment there is a comment telling us why it will never work.
And the comment after that comment is from the author of the proposal, jtoomim, explaining why it would work.
Score hidden on all those comments.
Because the immature tyrant theymos still doesn't understand the inherent advantages of people using reddit's upvoting & downvoting tools to hold decentralized, permissionless debates online.
Whatever.
Questions:
(1) Would this "BlockTorrenting" algorithm from jtoomim really work?
(2) If so, why hasn't it been implemented yet?
(3) Specifically: With all the "dev firepower" (and $76 million in venture capital) available at Core/Blockstream, why have they not prioritized implementing this simple and safe and highly effective solution?
(4) Even more specifically: Are there undisclosed strategies / agreements / restraints imposed by Blockstream financial investors on Bitcoin "Core" devs which have been preventing further discussion and eventual implementation of this possible simple & safe & efficient scaling solution?
Here is the more-detailed version of this proposal, presented by Jonathan Toomim (jtoomim) back in September of 2015 on the bitcoin-dev mailing list (and pretty much ignored for months by almost all the "experts" there):
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011176.html
As I understand it, the current block propagation algorithm is this:
  1. A node mines a block.
  2. It notifies its peers that it has a new block with an inv. Typical nodes have 8 peers.
  3. The peers respond that they have not seen it, and request the block with getdata [hash].
  4. The node sends out the block in parallel to all 8 peers simultaneously. If the node's upstream bandwidth is limiting, then all peers will receive most of the block before any peer receives all of the block. The block is sent out as the small header followed by a list of transactions.
  5. Once a peer completes the download, it verifies the block, then enters step 2.
(If I'm missing anything, please let me know.)
The main problem with this algorithm is that it requires a peer to have the full block before it does any uploading to other peers in the p2p mesh. This slows down block propagation to:
O( p • log_p(n) ) 
where:
  • n is the number of peers in the mesh,
  • p is the number of peers transmitted to simultaneously.
It's like the Napster era of file-sharing. We can do much better than this.
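To put rough numbers on that bound, here is a toy model (the mesh size, fan-out and upload time below are assumptions, not measurements):

```python
import math

# Toy model of the O( p * log_p(n) ) store-and-forward estimate above.
n = 6000         # full nodes in the mesh (assumed)
p = 8            # peers each node uploads to simultaneously
t_block = 30.0   # seconds for one node to push a full block to its p peers

hops = math.log(n, p)              # rounds needed to reach every node
store_and_forward = t_block * hops # each round waits for a COMPLETE upload

print(f"~{hops:.1f} hops, ~{store_and_forward:.0f} s store-and-forward")
# Torrent-style relay overlaps the rounds: a peer re-uploads each verified
# chunk immediately, so the total tends toward ONE upload time plus latency.
```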
Bittorrent can be an example for us.
Bittorrent splits the file to be shared into a bunch of chunks, and hashes each chunk.
Downloaders (leeches) grab the list of hashes, then start requesting chunks from their peers, out-of-order.
As each leech completes a chunk and verifies it against the hash, it begins to share those chunks with other leeches.
Total propagation time for large files can be approximately equal to the transmission time for an FTP upload.
Sometimes it's significantly slower, but often it's actually faster due to less bottlenecking on a single connection and better resistance to packet/connection loss.
(This could be relevant for crossing the Chinese border, since the Great Firewall tends to produce random packet loss, especially on encrypted connections.)
Bitcoin uses a data structure for transactions with hashes built-in. We can use that in lieu of Bittorrent's file chunks.
A Bittorrent-inspired algorithm might be something like this:
  1. (Optional steps to build a Merkle cache; described later)
  2. A seed node mines a block.
  3. It notifies its peers that it has a new block with an extended version of inv.
  4. The leech peers request the block header.
  5. The seed sends the block header. The leech code path splits into two.
  6. (a) The leeches verify the block header, including the PoW. If the header is valid,
  7. (a) They notify their peers that they have a header for an unverified new block with an extended version of inv, looping back to step 3 above. If it is invalid, they abort thread (b).
  8. (b) The leeches request the Nth row (from the root) of the transaction Merkle tree, where N might typically be between 2 and 10. That corresponds to about 1/4th to 1/1024th of the transactions. The leeches also request a bitfield indicating which of the Merkle nodes the seed has leaves for. The seed supplies this (0xFFFF...). (See the Merkle-row sketch just after this list.)
  9. (b) The leeches calculate all parent node hashes in the Merkle tree, and verify that the root hash is as described in the header.
  10. The leeches search their Merkle hash cache to see if they have the leaves (transaction hashes and/or transactions) for that node already.
  11. The leeches send a bitfield request to the node indicating which Merkle nodes they want the leaves for.
  12. The seed responds by sending leaves (either txn hashes or full transactions, depending on benchmark results) to the leeches in whatever order it decides is optimal for the network.
  13. The leeches verify that the leaves hash into the ancestor node hashes that they already have.
  14. The leeches begin sharing leaves with each other.
  15. If the leaves are txn hashes, they check their cache for the actual transactions. If they are missing it, they request the txns with a getdata, or all of the txns they're missing (as a list) with a few batch getdatas.
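To illustrate steps 8, 9 and 13, here is a self-contained sketch of serving a row of the transaction Merkle tree and verifying received leaves against it. This is a toy: the tree construction mimics Bitcoin's double-SHA256 Merkle trees, but the "row N" framing and all values here are illustrative, not a wire format:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_rows(leaves):
    """All rows of a Merkle tree, from the leaves (row 0) up to the root."""
    rows = [list(leaves)]
    while len(rows[-1]) > 1:
        row = rows[-1]
        if len(row) % 2:                      # Bitcoin duplicates the last
            row = row + [row[-1]]             # hash of an odd-length row
        rows.append([h(row[i] + row[i + 1]) for i in range(0, len(row), 2)])
    return rows

# Seed side: the block's txids (fake values for the example).
txids = [h(bytes([i])) for i in range(16)]
rows = merkle_rows(txids)
root = rows[-1][0]

# Step 8: a leech requests row N counted from the root; N=2 gives 4 interior
# hashes, each committing to 1/4 of the transactions.
N = 2
row_n = rows[len(rows) - 1 - N]

# Step 9: the leech hashes row N upward and checks it against the header root.
assert merkle_rows(row_n)[-1][0] == root

# Step 13: a batch of leaves received later is verified against its row-N
# ancestor BEFORE being relayed onward - invalid data is never forwarded.
chunk = txids[0:4]                            # the leaves under row_n[0]
assert merkle_rows(chunk)[-1][0] == row_n[0]
print("header root and first leaf chunk verified")
```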
Features and benefits
The main feature of this algorithm is that a leech will begin to upload chunks of data as soon as it gets them and confirms both PoW and hash/data integrity, instead of waiting for a full copy with full verification.
Inefficient cases, and mitigations
This algorithm is more complicated than the existing algorithm, and won't always be better in performance.
Because more round trip messages are required for negotiating the Merkle tree transfers, it will perform worse in situations where the bandwidth to ping latency ratio is high relative to the blocksize.
Specifically, the minimum per-hop latency will likely be higher.
This might be mitigated by reducing the number of round-trip messages needed to set up the BlockTorrent by using larger and more complex inv-like and getdata-like messages that preemptively send some data (e.g. block headers).
This would trade off latency for bandwidth overhead from larger duplicated inv messages.
Depending on implementation quality, the latency for the smallest block size might be the same between algorithms, or it might be 300% higher for the torrent algorithm.
For small blocks (perhaps < 100 kB), the BlockTorrent algorithm will likely be slightly slower.
Sidebar from the OP: So maybe this would discourage certain miners (cough Dow cough) from mining blocks that aren't full enough:
Why is [BTCC] limiting their block size to under 750 all of a sudden?
https://np.reddit.com/r/Bitcoin/comments/486o1u/why_is_bttc_limiting_their_block_size_to_under/

For large blocks (e.g. 8 MB over 20 Mbps), I expect the BlockTorrent algo will likely be around an order of magnitude faster in the worst case (adversarial) scenarios, in which none of the block's transactions are in the caches.

One of the big benefits of the BlockTorrent algorithm is that it provides several obvious and straightforward points for bandwidth saving and optimization by caching transactions and reconstructing the transaction order.

Future work: possible further optimizations
A cooperating miner [could] pre-announce Merkle subtrees with some of the transactions they are planning on including in the final block.
Other miners who see those subtrees [could] compare the transactions in those subtrees to the transaction sets they are mining with, and can rearrange their block prototypes to use the same subtrees as much as possible.
In the case of public pools supporting the getblocktemplate protocol, it might be possible to build Merkle subtree caches without the pool's help by having one or more nodes just scraping their getblocktemplate results.
Even if some transactions are inserted or deleted, it [might] be possible to guess a lot of the tree based on the previous ordering.
Once a block header and the first few rows of the Merkle tree [had] been published, they [would] propagate through the whole network, at which time full nodes might even be able to guess parts of the tree by searching through their txn and Merkle node/subtree caches.
That might be fun to think about, but probably not effective due to O(n²) or worse scaling with transaction count.
Might be able to make it work if the whole network cooperates on it, but there are probably more important things to do.
Leveraging other features from BitTorrent
There are also a few other features of Bittorrent that would be useful here (a sketch of the second one follows this list), like:
  • prioritizing uploads to different peers based on their upload capacity,
  • banning peers that submit data that doesn't hash to the right value.
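A minimal sketch of that second feature - rejecting a chunk that fails hash verification and banning the peer that sent it (the peer-tracking details here are made up for illustration):

```python
import hashlib

banned = set()

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def accept_chunk(peer_id: str, data: bytes, expected_hash: bytes) -> bool:
    """Accept a chunk only if it hashes to the value the Merkle tree
    commits to; otherwise ban the peer that sent it."""
    if peer_id in banned:
        return False
    if sha256d(data) != expected_hash:
        banned.add(peer_id)   # a mismatch is cryptographic proof of misbehavior
        return False
    return True

good = b"transaction bytes"
assert accept_chunk("peer-a", good, sha256d(good))           # honest peer: ok
assert not accept_chunk("peer-b", b"forged", sha256d(good))  # forger: banned
assert "peer-b" in banned
```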
Sidebar from the OP: Hmm...maybe that would be one way to deal with the DDoS-ing we're experiencing right now? I know the DDoSer is using a rotating list of proxies, but still it could be a quick-and-dirty way to mitigate against his attack.
DDoS started again. Have a nice day, guys :)
https://np.reddit.com/r/Bitcoin_Classic/comments/47zglz/ddos_started_again_have_a_nice_day_guys/d0gj13y
(It might be good if we could get Bram Cohen to help with the implementation.)
Using the existing BitTorrent algorithm as-is - versus tailoring a new algorithm optimized for Bitcoin
Another possible option would be to just treat the block as a file and literally Bittorrent it.
But I think that there should be enough benefits to integrating it with the existing bitcoin p2p connections and also with using bitcoind's transaction caches and Merkle tree caches to make a native implementation worthwhile.
Also, BitTorrent itself was designed to optimize more for bandwidth than for latency, so we will have slightly different goals and tradeoffs during implementation.
Concerns, possible attacks, mitigations, related work
One of the concerns that I initially had about this idea was that it would involve nodes forwarding unverified block data to other nodes.
At first, I thought this might be useful for a rogue miner or node who wanted to quickly waste the whole network's bandwidth.
However, in order to perform this attack, the rogue needs to construct a valid header with a valid PoW, but use a set of transactions that renders the block as a whole invalid in a manner that is difficult to detect without full verification.
Even so, it will be difficult to design such an attack so that the damage in bandwidth used has a greater value than the 240 exahashes (and 25.1 BTC opportunity cost) associated with creating a valid header.
Related work: IBLT (Invertible Bloom Lookup Tables)
As I understand it, the O(1) IBLT approach requires that blocks follow strict rules (yet to be fully defined) about the transaction ordering.
If these are not followed, then it turns into sending a list of txn hashes, and separately ensuring that all of the txns in the new block are already in the recipient's mempool.
When mempools are very dissimilar, the IBLT approach performance degrades heavily and performance becomes worse than simply sending the raw block.
This could occur if a node just joined the network, during chain reorgs, or due to malicious selfish miners.
Also, if the mempool has a lot more transactions than are included in the block, the false positive rate for detecting whether a transaction already exists in another node's mempool might get high for otherwise reasonable bucket counts/sizes.
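For readers who haven't met IBLTs: each side XORs its transaction IDs into a small fixed-size table; subtracting two tables leaves only the symmetric difference, which can be "peeled" out cell by cell - but only if the difference is small relative to the table, which is exactly the degradation described above. A toy sketch (all parameters are arbitrary, not from any proposal):

```python
import hashlib

CELLS, HASHES, ITEM = 32, 3, 8   # toy parameters: cells, index hashes, txid bytes

def idx(item):
    """Deterministic cell indices for an item (salted hashes, deduplicated)."""
    return sorted({int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:4],
                                  "big") % CELLS for i in range(HASHES)})

def chk(item):
    """Per-item checksum, used to detect 'pure' cells while peeling."""
    return int.from_bytes(hashlib.sha256(b"chk" + item).digest()[:4], "big")

def new_table():
    return [[0, bytes(ITEM), 0] for _ in range(CELLS)]  # [count, xor(items), xor(chks)]

def toggle(table, item, sign=1):
    for p in idx(item):
        cell = table[p]
        cell[0] += sign
        cell[1] = bytes(a ^ b for a, b in zip(cell[1], item))
        cell[2] ^= chk(item)

def subtract(a, b):
    return [[ca - cb, bytes(x ^ y for x, y in zip(pa, pb)), ka ^ kb]
            for (ca, pa, ka), (cb, pb, kb) in zip(a, b)]

def peel(table):
    """Recover the symmetric difference; returns None if decoding fails."""
    out, progress = [], True
    while progress:
        progress = False
        for cell in table:
            if abs(cell[0]) == 1 and cell[2] == chk(cell[1]):  # "pure" cell
                item, sign = cell[1], cell[0]   # sign says which side had it
                out.append(item)
                toggle(table, item, -sign)
                progress = True
    return out if all(c[0] == 0 for c in table) else None

# Two mempools that differ in three transactions:
mine   = [hashlib.sha256(bytes([i])).digest()[:ITEM] for i in range(20)]
theirs = mine[:18] + [hashlib.sha256(b"x").digest()[:ITEM]]

t1, t2 = new_table(), new_table()
for tx in mine:   toggle(t1, tx)
for tx in theirs: toggle(t2, tx)
diff = peel(subtract(t1, t2))
print(len(diff), "differing txids recovered")
```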
With the BlockTorrent approach, the focus is on transmitting the list of hashes in a manner that propagates as quickly as possible while still allowing methods for reducing the total bandwidth needed.
Remark
The BlockTorrent algorithm does not really address how the actual transaction data will be obtained because, once the leech has the list of txn hashes, the standard Bitcoin p2p protocol can supply them in a parallelized and decentralized manner.
Thoughts?
-jtoomim
submitted by ydtm to /r/btc

Bitcoin dev IRC meeting in layman's terms (2016-01-21)

Once again my attempt to summarize and explain the weekly bitcoin developer meeting in layman's terms. Link to last summarisation
Disclaimer
Please bear in mind I'm not a developer so some things might be incorrect or plain wrong. There are no decisions being made in these meetings, but since a fair amount of devs are present it's a good representation. Copyright: Public domain

Logs

Main topics

Short topics

0.11 backport release for chainstate obfuscation

background

As some windows users might have experienced in the past, anti-virus software regularly detects values in the bitcoin database files which are false-positives, thereby deleting those files and corrupting the database. To prevent this from happening, developers discussed a way to obfuscate the database files and implemented it last year. While downgrading after upgrading is possible, if you start from a new 0.12 installation or you've done a -reindex on 0.12, it's impossible to downgrade to 0.11 (without starting from scratch).
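For context, the obfuscation works roughly like this - a minimal sketch of the XOR-based approach (Bitcoin Core's actual implementation stores an 8-byte random key in the database itself; everything else here is simplified):

```python
import os
from itertools import cycle

def xor_obfuscate(value: bytes, key: bytes) -> bytes:
    """XOR every byte of a stored value with a repeating key. The same
    function de-obfuscates, since XOR is its own inverse."""
    return bytes(b ^ k for b, k in zip(value, cycle(key)))

key = os.urandom(8)                      # generated once per database
record = b"\x01\x02utxo-entry-bytes..."  # hypothetical chainstate value

stored = xor_obfuscate(record, key)      # what lands on disk: looks random,
                                         # so AV pattern-matching finds nothing
assert xor_obfuscate(stored, key) == record
```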

meeting comments

The proposed pull-request detects the obfuscation in 0.11 so it throws a relevant error message. To avoid this in the future it would be good to have version numbers for the chainstate.

meeting conclusion

Release a 0.11 backport release right after the 0.12 final release to avoid confusion.

C++11 update

background

C++11 is an update of the C++ language. It offers new functionalities, an extended standard library, etc. Zerocash had to be written with some c++11 libraries and some IBLT simulation code was written in c++11, which they want to recycle for the eventual core commit.

meeting comments

All changes needed for C++11 have gone in and it's ready to switch. Cfields talked to the travis team and all the features needed (trusty, caching) will be ready by the end of the month, so he proposes to wait until then to flip the switch. Wangchung from f2pool indicated he would not run code that required a C++11 compiler. No one knows what his exact concerns are. Wumpus notes the gitian-built executables don't need any special OS support after the C++11 switch.

meeting conclusion

Wait for Travis update to switch to C++11. Talk to wangchung about his concerns.

EOL Policy / release cycles

background

In general bugfixes, translations and softforks are maintained for 2 major releases. btcdrak proposed to make this official in a software life-cycle document for bitcoin core, in order to inform users what to expect and developers what to code for. Pull request for this document. Given the huge 0.12 changelog, jonasschnelli asks whether shorter release cycles might be a good idea. Currently there's a +/- 6 month release cycle.

meeting comments

Gmaxwell notes he doesn't know how useful the backports are given there's no feedback about them, but thinks the current policy is not bad. "I am observing the backports appear to be a waste of time. From a matter of principle, I think they are important, but the industry doesn't appear to agree." If no one is using the backports, it might not see sufficient testing. People generally agree with the 2 major releases approach.
The cycle length also contributes to frustration and pressure to get features in, as a feature won't see the light of day for 6 months if it doesn't make the new release. For users it's not really better to have more frequent major releases, as upgrading may not always be a trivial process. There's also a lot of work going into releases. If the GUI and wallet were detached there could be more frequent releases for that part.

meeting conclusion

Policy will be: final release of 0.X means end-of-life of 0.(X-2), which means a 1 year support on the 6 month cycle.

Participants

wumpus - Wladimir J. van der Laan
gmaxwell - Gregory Maxwell
jonasschnelli - Jonas Schnelli
cfields - Cory Fields
btcdrak - btcdrak
sipa - Pieter Wuille
jtimon - Jorge Timón
maaku - Mark Friedenbach
kangx_ - Kang Zhang (?)
sdaftuar - Suhas Daftuar
phantomcircuit - Patrick Strateman
CodeShark - Eric Lombrozo
bsm117532 - Bob McElrath
dkog - dkog (?)
jeremias - Jeremias Kangas (?)

Comic relief

jonasschnelli: maaku: refactoring? We have a main.cpp. We don't need refactoring. :)
gmaxwell: jonasschnelli: can we move everything back into main.cpp? I'd save a lot of time grepping. :P
wumpus: #endmeeting
lightningbot`: Meeting ended Thu Jan 21 19:55:48 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
btcdrak: wumpus: hole in one
maaku: Did it right this time!
gmaxwell: Hurray!
submitted by G1lius to /r/Bitcoin

[Index] Scaling Conference Overview: Day 1

Conference Topic \ Speaker \ Time Link
Privacy \ Fungibility
Scalability
Smart Contracts
Proof of Work
submitted by KarmaNote to /r/Bitcoin


Encoding/decoding blocks in IBLT, experiments on O(1) block propagation

I've been working on an IBLT written in Java, as well as a project to encode and decode Bitcoin blocks using this IBLT. The main inspiration comes from Gavin Andresen's (gavinandresen) excellent writeup on O(1) block propagation, https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2.
The projects are called ibltj (https://github.com/kallerosenbaum/ibltj) and bitcoin-iblt (https://github.com/kallerosenbaum/bitcoin-iblt). In bitcoin-iblt I've run some experiments to find a good value size and a good number of hash functions to use. Have a look at the results at https://github.com/kallerosenbaum/bitcoin-iblt/wiki/BlockStatsTest
I'm very interested in discussing this and listening to your comments. I also need some help specifying other tests to perform. I'm thinking it would be nice to have some kind of statement like: "Given that there are no more than 100 differing transactions, I need 867 cells of size 270 B to have <0.1% chance that decoding fails." Any thoughts on this?
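One cheap way to attack that kind of question before running a full test bench is a peeling simulation. This toy is idealized (it models "pure cell" detection perfectly and ignores value size and checksum false positives), so its numbers are only indicative:

```python
import random

def decode_succeeds(d, cells, k=3):
    """Idealized IBLT decode as hypergraph peeling: d differing items each
    occupy k random cells; decoding succeeds iff repeatedly removing items
    that own a degree-1 ("pure") cell empties the table."""
    items = [random.sample(range(cells), k) for _ in range(d)]
    deg = [0] * cells
    for it in items:
        for c in it:
            deg[c] += 1
    alive, progress = set(range(d)), True
    while progress:
        progress = False
        for i in list(alive):
            if any(deg[c] == 1 for c in items[i]):
                for c in items[i]:
                    deg[c] -= 1
                alive.discard(i)
                progress = True
    return not alive

def failure_rate(d, cells, trials=1000):
    return sum(not decode_succeeds(d, cells) for _ in range(trials)) / trials

# e.g. sweep the cell count for 100 differing transactions:
for cells in (110, 125, 140, 170, 200):
    print(cells, failure_rate(100, cells))
```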
The test bench is pretty capable. I can perform tests on arbitrarily large fake blocks constructed from real world transactions. I can modify the following parameters:
submitted by kallerosenbaum to Bitcoin [link] [comments]

Reducing the block rate instead of increasing the maximum block size | Sergio Lerner | May 11 2015

Sergio Lerner on May 11 2015:
In this e-mail I'll do my best to argue that if you accept that increasing the transactions/second is a good direction to go, then increasing the maximum block size is not the best way to do it. I argue that the right direction to go is to decrease the block rate to 1 minute, while keeping the block size limit at 1 Megabyte (or increasing it from a lower value such as 100 Kbyte and then have a step function). I'm backing up my claims with many hours of research simulating the Bitcoin network under different conditions [1]. I'll try to convince you by responding to each of the arguments I've heard against it.
Arguments against reducing the block interval
  1. It will encourage centralization, because participants of mining pools will lose more money because of excessive initial block template latency, which leads to higher stale shares.

When a new block is solved, that information needs to propagate throughout the Bitcoin network up to the mining pool operator nodes, then a new block header candidate is created, and this header must be propagated to all the mining pool users, either by a push or a pull model. Generally the mining server pushes new work units to the individual miners. If done the other way around, the server would need to handle a high load of continuous work requests that would be difficult to distinguish from a DDoS attack. So if the server pushes new block header candidates to clients, then the problem boils down to increasing the bandwidth of the servers to achieve a tenfold increase in work distribution, or distributing the servers geographically to achieve a lower latency. Propagating blocks does not require additional CPU resources, so mining pool administrators would need to moderately increase their investment in server infrastructure to achieve lower latency and higher bandwidth, but I guess the investment would be low.
  2. It will increase the probability of a block-chain split.

The convergence of the network relies on the diminishing probability of two honest miners creating simultaneous competing block chains. For the competition to be sustained, competing blocks must be generated almost simultaneously (in the same time window, approximately bounded by the network's average block propagation delay). The probability of a block competition decreases exponentially with the number of blocks. In fact, the probability of a sustained competition over ten 1-minute blocks is one million times lower than the probability of a competition over one 10-minute block. So even if the competition probability of six 1-minute blocks is higher than that of six 10-minute blocks, this does not imply that reducing the block rate increases this chance - on the contrary, it reduces it.
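A quick way to see the exponential claim - a crude toy model, with an assumed propagation delay and independence between rounds (neither is from the original post):

```python
# Toy model of sustained block competition. Assume a competing block only
# keeps the race alive if it is found within the network propagation delay
# of its rival. The delay and independence assumptions are illustrative.

delay = 5.0                     # seconds of average block propagation delay

for interval, label in ((600.0, "10-minute"), (60.0, "1-minute")):
    p_tie = delay / interval    # rough chance one round stays "tied"
    # Probability the competition survives k consecutive rounds decays
    # exponentially: p_tie ** k.
    print(f"{label} blocks: one round ~{p_tie:.4f}, ten rounds ~{p_tie**10:.3e}")
```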
  3. It will reduce the security of the network.

The security of the network is based on three facts:

A - The miners are incentivized to extend the best chain.
B - The probability of a reversal based on a long block competition decreases as more confirmation blocks are appended.
C - Renting or buying hardware to perform a 51% attack is costly.

A still holds. B holds for the same amount of confirmation blocks, so 6 confirmation blocks in a 10-minute block-chain are approximately equivalent to 6 confirmation blocks in a 1-minute block-chain. Only C changes, as renting the hashing power for 6 minutes is ten times less expensive than renting it for 1 hour. However, there is no shop where one can find 51% of the hashing power to rent right now, nor will there probably ever be if Bitcoin succeeds. Last, you can still have a 1-hour confirmation (60 1-minute blocks) if you wish, for high-valued payments, so the security decreases only if participants wish to decrease it.
  4. Reducing the block propagation time in the average case is good, but what happens in the worst case?

Most methods proposed to reduce the block propagation delay do it only in the average case. Any kind of block compression relies on both parties sharing some previous information. In the worst case it's true that a miner can create and try to broadcast a block that takes too much time to verify or too much bandwidth to transmit. This is currently true on the Bitcoin network. Nevertheless there is no such incentive for miners, since they would be shooting themselves in the foot. Peter Todd has argued that the best strategy for miners is actually to reach 51% of the network, but not more - in other words, to exclude the slowest 49 percent. But this strategy of creating bloated blocks is too risky in practice, and surely doomed to fail, as network conditions change dynamically. Also it would be perceived as an attack on the network, and the miner (if it is a public mining pool) would probably be blacklisted.
  5. Thousands of SPV wallets running on mobile devices would need to be upgraded (thanks Mike).

That depends on the current upgrade rate for SPV wallets like Bitcoin Wallet and BreadWallet. Suppose the upgrade rate is 80%/year: if we develop the source code for the change now and apply the change in Q2 2016, then most of the nodes will already have been upgraded by the time the hardfork takes place. Also, a public notice telling people to upgrade - on web pages, bitcointalk, SPV wallet warnings, coindesk - one year in advance will give SPV wallet users plenty of time to upgrade.
  6. If there are 10x more blocks, then there are 10x more block headers, and that increases the amount of bandwidth SPV wallets need to catch up with the chain.

A standard smartphone with average cellular downstream speed downloads 2.6 headers per second (1600 kbits/sec) [3], so if synchronization were to be done only at night, when the phone is connected to the power line, then it would take 9 minutes to synchronize with 1440 headers/day. If a person needs to accept a payment and the smartphone is 1 day out-of-sync, then it takes less time to download all the missing headers than to wait for a 10-minute one-block confirmation. Obviously all smartphones with 3G have a much higher downstream bandwidth, averaging 1 Mbps, so the whole synchronization will take less than a 1-minute block confirmation.

According to CISCO, mobile connection speed increases 20% every year. In four years it will have doubled, so mobile phones with a lower-than-average data connection will soon be able to catch up.

Also there are low-hanging-fruit optimizations to the protocol that have not been implemented: each header is 80 bytes in length. When a set of chained headers is transferred, the headers could be compressed, stripping the 32 bytes of each header that are derived from the previous header's hash digest. So a 40% compression is already possible by slightly modifying the wire protocol.
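A minimal sketch of that low-hanging-fruit optimization. The field layout follows Bitcoin's 80-byte header, but the compressed framing here is made up, not a real wire format: since each header's prev-block field must equal the hash of the previous header, a contiguous run of headers can be sent without it and reconstructed by the receiver.

```python
import hashlib

# Bitcoin block header layout (80 bytes):
#   version(4) | prev_block(32) | merkle_root(32) | time(4) | bits(4) | nonce(4)

def sha256d(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def compress(headers):
    """Send the first header whole; for each later header, drop the 32-byte
    prev_block field (bytes 4..36), since the receiver can recompute it."""
    out = [headers[0]]
    out += [h[:4] + h[36:] for h in headers[1:]]   # 48 bytes each: 40% saved
    return b"".join(out)

def decompress(blob):
    headers = [blob[:80]]
    pos = 80
    while pos < len(blob):
        chunk = blob[pos:pos + 48]
        prev_hash = sha256d(headers[-1])           # re-derive the dropped field
        headers.append(chunk[:4] + prev_hash + chunk[4:])
        pos += 48
    return headers

# Round-trip on a toy chain of fake (but correctly linked) headers:
h0 = bytes(80)
h1 = bytes(4) + sha256d(h0) + bytes(44)
h2 = bytes(4) + sha256d(h1) + bytes(44)
chain = [h0, h1, h2]
assert decompress(compress(chain)) == chain
print(f"{len(compress(chain))} bytes instead of {80 * len(chain)}")
```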
  7. There has been insufficient testing and/or insufficient research into the technical/economic implications of reducing the block rate.

This is partially true. In the GHOST paper this has been analyzed, and the problem was shown to be solvable for block intervals of just a few seconds. There are several proof-of-work cryptocurrencies in existence that have block intervals lower than 1 minute, and they work just fine. First there was Bitcoin with a 10-minute interval, then came LiteCoin using a 2.5-minute interval, then DogeCoin with 1 minute, and then QuarkCoin with just 30 seconds. Every new cryptocurrency lowers it a little bit. Some time ago I decided to research the block rate to understand how the block interval impacts the stability and capability of the cryptocurrency network, and I came up with the idea of the DECOR+ protocol [4] (which requires changes in the consensus code). In my research I also showed how the stale rate can easily be reduced with changes only in the networking code, and not in the consensus code. These networking optimizations (O(1) propagation using headers-first or IBLTs) can be added later.
Modifying Bitcoin to accommodate the change to lower the block rate requires at least:
have upgraded.
version 2 as being multiplied by 10.
All changes comprise no more than 15 lines of code. This is much less than the number of lines modified by Gavin's 20Mb patch.
As a conclusion, I haven't yet heard a good argument against lowering the block rate.
Best regards,
Sergio.
[0] https://medium.com/@octskyward/the-capacity-cliff-586d1bf7715e
[1] https://bitslog.wordpress.com/2014/02/17/5-sec-block-interval/
[2] http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
[3]
http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white_paper_c11-520862.html
[4] https://bitslog.wordpress.com/2014/05/02/decor/
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008081.html
submitted by bitcoin-devlist-bot to /r/bitcoin_devlist

Blocksizing = Bikeshedding

(definition of "Bikeshedding" on Wikipedia)
"Everyone can visualize a cheap, simple bicycle shed, so planning one can result in endless discussions because everyone involved wants to add a touch and show personal contribution."
Talk is cheap. Everyone has an opinion on easy, hot-button issues.
Devs ACKing on Github and users debating on Reddit get sucked into participating in the never-ending Blocksize BIP Bikeshedding debates (even jstolfi has BIP 99.5!) - because it's easy for everyone to weigh in and give their opinion on the starting value and periodic bump for a simple integer parameter - but meanwhile almost nobody is doing the hard work involving crypto and hashing to implement practical, useful stuff like IBLT or SegWit - or other features that have been missing for so long we've forgotten we even needed them (eg: HD - hierarchical deterministic wallets - without which you can't permanently back up your wallet).
BIP 202 is just the latest example of Blocksizing = Bikeshedding
The latest episode of out-of-touch devs on Github ACKing yet another blocksize bikeshedding BIP (BIP 202 from jgarzik) is not actual "governance" and will not provide the scaling Bitcoin actually needs.
BIP 202 is wrong because it scales linearly instead of exponentially
https://np.reddit.com/r/btc/comments/3xf50u/bip_202_is_wrong_because_it_scales_linearly/
It would be like if you were selling a house for $200,000 and the buyer originally offered $100,000 and then offered $100,002 - you wouldn't say you were willing to compromise - you'd simply laugh in their face.
BIP 202 isn't even acceptable as a "compromise".
https://np.reddit.com/r/btc/comments/3xedu8/a_comparison_of_bip202_bip101_growth_rates_with/cy45fzz
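For intuition about why a linear schedule can't "meet an exponential halfway", here is a toy comparison. BIP 101's published schedule (8 MB, doubling every two years) is used as the exponential; the linear schedule is a made-up stand-in, NOT BIP 202's actual numbers:

```python
# Illustrative comparison of linear vs. exponential block size growth.
# The linear schedule below is hypothetical; BIP 101's is as published.

YEARS = 12

for year in range(0, YEARS + 1, 2):
    linear = 2 + 1 * year              # hypothetical: 2 MB start, +1 MB/year
    exponential = 8 * 2 ** (year / 2)  # BIP 101: 8 MB start, 2x every 2 years
    print(f"year {year:2}: linear {linear:4.0f} MB   exponential {exponential:6.0f} MB")
# After 12 years: 14 MB vs 512 MB - a linear schedule falls hopelessly
# behind anything that grows exponentially, which is the objection above.
```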
This is one of the reasons why this Blocksize BIP Bikeshedding debate is never-ending: it's easy, lazy, high-profile "executive decision-making" for devs, and easy, ponderous, philosophical pontificating for users, and everyone feels "qualified" to offer their expertise on how to set this one little parameter (which probably doesn't even need to be there in the first place since miners already soft-limit down as needed to avoid orphaning).
Nobody has been able to convincingly answer the question, "What should the optimal block size limit be?" And the reason nobody has been able to answer that question is the same reason nobody has been able to answer the question, "What should the price today be?" – tsontar
https://np.reddit.com/r/btc/comments/3xdc9e/nobody_has_been_able_to_convincingly_answer_the/
Setting a parameter is easy. Adding features is hard.
It's so much easier to simply propose a parameter versus actually adding any real features which real users really need in real life. There's a long list of much-needed features which none of these devs ever roll up their sleeves and work on, such as:
  • HD: hierarchical deterministic wallets (BIP 32), without which it's impossible to back up your wallet permanently
  • simple optimizations and factorings like IBLT / Thin Blocks / Weak Blocks / SegWit
When are we going to get a pluggable policy architecture for Core?
https://np.reddit.com/r/btc/comments/3v4u52/when_are_we_going_to_get_a_pluggable_policy/
Bikeshedding in politics.
By the way, you can see the parallel in US electoral politics, on forums and comment threads and Facebook, where everyone has a really important opinion they urgently need to share with the world on the eternal trinity of American hot-button issues (abortion and racism and gays) - but nobody really feels like spending the time and effort to come up with solutions for the complicated stuff like education, healthcare, student loans, housing prices, or foreign policy.
It's all just bikeshedding - a way of feeling self-important and getting attention, while the more-important and less-glamorous bread-and-butter nuts-and-bolts real-life user-experience issues get left by the wayside, because they're just too "complicated" and "difficult" and not "sexy" enough for most devs to actually work on.
submitted by ydtm to /r/btc

Bitcoin dev IRC meeting in layman's terms (2016-01-21)

Once again my attempt to summarize and explain the weekly bitcoin developer meeting in layman's terms. Link to last summarisation
Disclaimer
Please bear in mind I'm not a developer so some things might be incorrect or plain wrong. There are no decisions being made in these meetings, but since a fair amount of devs are present it's a good representation. Copyright: Public domain

Logs

Main topics

Short topics

0.11 backport release for chainstate obfuscation

background

As some windows users might have experienced in the past, anti-virus software regularly detects values in the bitcoin database files which are false-positives. Thereby deleting those files and corrupting the database. To prevent this from happening developers discussed a way to obfuscate the database files and implemented it last year. While downgrading after upgrading is possible, if you start from a new 0.12 installation or you've done a -reindex on 0.12 it's impossible to downgrade to 0.11 (without starting from scratch).

meeting comments

The proposed pull-request detects the obfuscation in 0.11 so it throws a relevant error message. To avoid this in the future it would be good to have versionnumbers for the chainstate.

meeting conclusion

Release a 0.11 backport release right after the 0.12 final release to avoid confusion.

C++11 update

background

C++11 is an update of the C++ language. It offers new functionalities, an extended standard library, etc. Zerocash had to be written with some c++11 libraries and some IBLT simulation code was written in c++11, which they want to recycle for the eventual core commit.

meeting comments

All changes needed for C++11 have gone in and it's ready to switch. Cfields talked to the travis team and all the features needed (trusty, caching) will be ready by the end of the month, so he proposes to wait until then to flip the switch. Wangchung from f2pool indicated he would not run code that required a C++ compiler. No one knows what his exact concerns are. Wumpus notes the gitian-built executables don't need any special OS support after the C++11 switch.

meeting conclusion

Wait for Travis update to switch to C++11. Talk to wangchung about his concerns.

EOL Policy / release cycles

background

In general bugfixes, translations and softforks are maintained for 2 major releases. btcdrak proposed to makes this official into a software life-cycle document for bitcoin core in order to inform users what to expect and developers what to code for. Pull request for this document. Given the huge 0.12 changelog jonasschnelli asks whether shorter release cycles might be a good idea. Currently there's a +/- 6 month release cycle.

meeting comments

Gmaxwell notes he doesn't know how useful the backports are given there's no feedback about them, but thinks the current policy is not bad: "I am observing the backports appear to be a waste of time. From a matter of principle, I think they are important, but the industry doesn't appear to agree." If no one is using the backports, they might not see sufficient testing. People generally agree with the 2-major-releases approach.
The cycle length also contributes to frustration and pressure to get features in, as a feature won't see the light of day for 6 months if it misses the new release. For users, more frequent major releases aren't necessarily better, as upgrading may not always be a trivial process. There's also a lot of work going into each release. If the GUI and wallet were detached, there could be more frequent releases for that part.

meeting conclusion

Policy will be: the final release of 0.X means end-of-life for 0.(X-2), which amounts to roughly 1 year of support on the 6-month cycle. For example, the 0.13.0 final release would mark the end-of-life of the 0.11.x series.

Participants

  • wumpus (Wladimir J. van der Laan)
  • gmaxwell (Gregory Maxwell)
  • jonasschnelli (Jonas Schnelli)
  • cfields (Cory Fields)
  • btcdrak (btcdrak)
  • sipa (Pieter Wuille)
  • jtimon (Jorge Timón)
  • maaku (Mark Friedenbach)
  • kangx_ (Kang Zhang ???)
  • sdaftuar (Suhas Daftuar)
  • phantomcircuit (Patrick Strateman)
  • CodeShark (Eric Lombrozo)
  • bsm117532 (Bob McElrath)
  • dkog (?dkog?)
  • jeremias (Jeremias Kangas ???)

Comic relief

jonasschnelli: maaku: refactoring? We have a main.cpp. We don't need refactoring. :)
gmaxwell: jonasschnelli: can we move everything back into main.cpp? I'd save a lot of time grepping. :P
wumpus: #endmeeting
lightningbot`: Meeting ended Thu Jan 21 19:55:48 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
btcdrak: wumpus: hole in one
maaku: Did it right this time!
gmaxwell: Hurray!
submitted by G1lius to btc [link] [comments]

[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?

https://en.wikipedia.org/wiki/Embarrassingly_parallel
In parallel computing, an embarrassingly parallel workload or problem is one where little or no effort is required to separate the problem into a number of parallel tasks. This is often the case where there exists no dependency (or communication) between those parallel tasks.
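As a toy illustration (mine, not from the Wikipedia article): summing the two halves of an array on separate threads is embarrassingly parallel, because neither task needs anything from the other until the final combine step.

    #include <future>
    #include <numeric>
    #include <vector>

    int main()
    {
        std::vector<int> data(1000, 1);
        const size_t half = data.size() / 2;
        auto sum = [&](size_t lo, size_t hi) {
            return std::accumulate(data.begin() + lo, data.begin() + hi, 0);
        };
        // Two independent tasks: no communication, no shared writes.
        auto left  = std::async(std::launch::async, sum, size_t{0}, half);
        auto right = std::async(std::launch::async, sum, half, data.size());
        return (left.get() + right.get() == 1000) ? 0 : 1;
    }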
What if the basic, accepted, bedrock components / behaviors of the Bitcoin architecture as we currently understand it ... actually cannot scale without significantly modifying some of them?
Going by the never-ending unresolved debates of the past year, maybe we need to seriously consider that possibility.
Maybe we're doing it wrong.
Maybe we need to think more "outside the box".
Maybe instead of thinking about "hard forks", we should be thinking about "smart forks".
Maybe we can find a scaling solution which figures out a way to exploit something "embarrassingly parallel" about the above components and behaviors.
Even the supporters of most of the current scaling approaches (XT, LN, etc.), in their less guarded moments, have admitted that all of these approaches do actually involve some tradeoffs (downsides).
We seem to be at a dead end: every solution proposed so far involves too many tradeoffs and downsides for one group or another; no single approach is able to gain "rough consensus"; and we're starting to give up on achieving massive, natural, "embarrassingly parallel" scaling...
...despite the fact that we have 700 petahashes of mining power, and hard drive space is dirt cheap, and BitTorrent manages to distribute many gigabytes of files around the world all the time, and Google is somehow able to (appear to) search billions of web pages in seconds (on commodity hardware)...
Is there a "sane" way to open up the debate so that any hard-fork(s) will also be as "smart" as possible?
Specifically:
  • (1) Could we significantly modify some components / behaviors in the above list to provide massive scaling, while still leaving the other components / behaviors untouched?
  • (2) If so, could we prioritize the components / behaviors along various criteria or dimensions, such as:
    • (a) more or less unmodifiable / must remain untouched
    • (b) more or less expensive / bottleneck
For example, we might decide that:
  • "bandwidth" (for relaying transactions in the mempool) is probably the most-constrained bottleneck (it's scarce and hard to get without moving - just ask miners on either side of China's Great Firewall, or Luke-Jr who apparently lives in some backwater in Florida with no bandwidth)
  • "hard disk space" (for storing transactions in the blockchain) is probably the least-constrained bottleneck: hard drive space is generally much cheaper and plentiful compared to bandwidth and processing power
  • Some aspects such as the "blockchain" itself might also be considered "least modifiable" - we do want to find a way to append more transactions onto it (which might take up more space on hard drives), but we could agree that we don't want to modify the basic structure of the blockchain.
Examples:
  • SegWit would refactor the merkle trees in the blockchain to separate out validation (signature) data from address and amount data, making various types of pruning more natural, which would save on hard drive space (for SPV clients), but I'm not sure if it would save on bandwidth.
  • IBLT (Invertible Bloom Lookup Tables), Thin Blocks and Weak Blocks are all proposals (if I understand correctly) which involve "compressing" the transactions inside a block (using some clever hashing), at least for relay purposes, although (if I understand correctly) the full, non-compressed block would still eventually have to be stored in the blockchain. (A rough sketch of the IBLT idea follows this list.)
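Here is that sketch of the IBLT data structure (a simplified toy of mine, not any proposal's actual code): items are XORed into k cells chosen by hash, and because XOR cancels symmetrically, a peer can subtract its own mempool from a received IBLT and "peel" out just the transactions it's missing.

    #include <cstdint>
    #include <vector>

    struct Cell {
        int32_t  count   = 0;  // net number of items hashed into this cell
        uint64_t key_sum = 0;  // XOR of all keys currently in this cell
        uint64_t val_sum = 0;  // XOR of the (fixed-size) value slices
    };

    // Insert (delta = +1) or delete (delta = -1) a key/value pair.
    void toggle(std::vector<Cell>& table, uint64_t key, uint64_t value,
                int delta)
    {
        const int k = 4;  // number of hash functions
        for (int i = 0; i < k; ++i) {
            // Toy hash: mix the key with the hash-function index.
            uint64_t h = key ^ (0x9e3779b97f4a7c15ULL * (i + 1));
            Cell& c = table[h % table.size()];
            c.count   += delta;
            c.key_sum ^= key;
            c.val_sum ^= value;
        }
    }
    // Any cell with count == +1 or -1 holds exactly one item; that item can
    // be read out and deleted from its other cells, often exposing more such
    // "pure" cells until the whole difference is decoded.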
I keep coming up with crazy buzzwords in my head like "Hierarchical Epochs" or "Mempool Sharding" or "Multiple, Disjoint Czars" (???).
Intuitively all of these approaches might involve somehow locally mining a bunch of low-value or high-kB or in-person transactions and then globally consolidating them into larger hierarchical / sharded structures using some "embarrassingly parallel" algorithm (along the lines of MapReduce?).
But maybe I'm just being seduced by my own buzzwords - because I'm having a hard time articulating what such approaches might concretely look like.
The only aspect of any such approach which I can identify as probably being "key" (to making the problem "embarrassingly parallel") would come from the Wikipedia quote at the start of this post:
there exists no dependency (or communication) between those parallel tasks
Applying this to Bitcoin, we might get the following basic requirement for parallelization:
there exist no outputs in common between those parallel (sets of) transactions
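A hypothetical check of that requirement (illustrative only; outpoints are written as "txid:n" strings for brevity): two batches of transactions are safe to process in parallel if they spend no outputs in common.

    #include <set>
    #include <string>
    #include <vector>

    using Outpoint = std::string;              // e.g. "ab12...ff:0"
    struct Tx { std::vector<Outpoint> inputs; };

    bool disjoint(const std::vector<Tx>& a, const std::vector<Tx>& b)
    {
        std::set<Outpoint> spent_by_a;
        for (const auto& tx : a)
            for (const auto& in : tx.inputs)
                spent_by_a.insert(in);
        for (const auto& tx : b)
            for (const auto& in : tx.inputs)
                if (spent_by_a.count(in))
                    return false;              // shared output: dependency
        return true;                           // no dependency: parallelize
    }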
TL;DR: Could there be some fairly natural ("embarrassingly parallel") way of:
  • decomposing the massive number of transactions in the mempool / in an epoch / among miners;
  • into hierarchical trees, or non-overlapping (disjoint) "shards";
  • and then recomposing them (and serializing them) as we append them to the blockchain? (A toy sketch follows below.)
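To make the question concrete, here is a toy decompose/recompose pipeline (entirely hypothetical; sharding by hashing the first spent outpoint is my placeholder, and it ignores transactions whose inputs straddle shards, which is exactly the hard part):

    #include <functional>
    #include <string>
    #include <vector>

    struct Tx { std::vector<std::string> inputs; };  // outpoints as "txid:n"

    // Decompose: assign each transaction to a shard by hashing its first input.
    std::vector<std::vector<Tx>> shard_mempool(const std::vector<Tx>& mempool,
                                               size_t num_shards)
    {
        std::vector<std::vector<Tx>> shards(num_shards);
        std::hash<std::string> h;
        for (const auto& tx : mempool)
            if (!tx.inputs.empty())
                shards[h(tx.inputs[0]) % num_shards].push_back(tx);
        return shards;  // each shard could now go to an independent worker
    }

    // Recompose: deterministically serialize the shards back into one list.
    std::vector<Tx> recompose(const std::vector<std::vector<Tx>>& shards)
    {
        std::vector<Tx> ordered;
        for (const auto& s : shards)
            ordered.insert(ordered.end(), s.begin(), s.end());
        return ordered;
    }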
submitted by ydtm to btc [link] [comments]

It's written in C++ and uses Bitcoin Core itself to read the data, so it's always 100% compliant with the latest Bitcoin release. I abstracted away the database functions, so you can implement "drivers" for any other DB system. I've been playing with it on MySQL, but perhaps others would prefer Neo4J or Cassandra for NoSQL graph analysis.

The Bitcoin price has given up 64 percent since its all-time high a good two years ago, but compared with cryptocurrencies such as Ethereum, XRP or Bitcoin Cash that is only a mild setback. To put it in proportion: Ethereum, XRP and Bitcoin Cash would have to multiply their current prices ten- to twentyfold to reach break-even. In Bitcoin's case, the price would have to ...

Value size. The first parameter I analyzed was the value size, i.e. the part of the cell that stores the transaction slice data. I started with a value size of 8 bytes and increased it in each iteration up to 512 bytes. For every value size I used interval halving to find the minimum decodable IBLT. For this test I use 4 hash functions, k=4 ...

Bitcoin again demonstrated its value as money without central control. Soon after the Greek crisis, China began to devalue the Yuan. As reported at the time, Chinese savers turned to Bitcoin to protect their accumulated wealth. 2015 Bitcoin chart by Tyler Durden of Zero Hedge. A current positive influence on the Bitcoin price, or at least on perception, is the Argentinian situation. Argentina's ...

IBLTs store data in key-value pairs. For Bitcoin, values need to be a fixed size. Using 48 bits from the transaction's data hash and a 16-bit sequence number gives a 64-bit key. Keys have a 0.1% chance of collision (two unique transactions with the same key) even at one million transactions a block. In the event that an actual collision occurs, the transaction will simply wait for the next block ...
