Sooner or later, PCIe is going to run out of gas for moving data, as its performance has not kept up with the requirements in some parts of the market, especially technical computing. PCIe 1.1 was originally developed to give the graphics industry a common card slot, though it was always planned to be used by all peripherals. Its original performance of 250 MB/sec per lane in 2004 was very fast compared to CPU speeds and memory bandwidth at the time, but only about a 4x improvement per lane by 2012 means PCIe performance is not scaling with either CPU or memory bandwidth.
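To put the scaling claim in concrete terms, here is a minimal sketch comparing commonly cited effective per-lane rates across PCIe generations. The specific figures are assumptions drawn from the usual published rates (8b/10b encoding for generations 1.x and 2.0, 128b/130b for 3.0), not vendor guarantees:

```python
# Approximate effective per-lane PCIe bandwidth by generation.
# Figures are commonly cited values, used here for illustration only.
PCIE_LANE_MB_S = {
    "1.x (2004)": 250,   # 2.5 GT/s line rate, 8b/10b encoding
    "2.0 (2007)": 500,   # 5.0 GT/s line rate, 8b/10b encoding
    "3.0 (2012)": 985,   # 8.0 GT/s line rate, 128b/130b encoding
}

base = PCIE_LANE_MB_S["1.x (2004)"]
for gen, mb_s in PCIE_LANE_MB_S.items():
    print(f"PCIe {gen}: {mb_s} MB/s per lane "
          f"({mb_s / base:.1f}x over 1.x)")
```

Over the same eight years, peak CPU floating-point throughput and memory bandwidth grew by far larger factors, which is the gap the column is pointing at.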
My belief is that at some point in the not-too-distant future, some vendor is going to place InfiniBand chips on the CPU board, bypassing the PCIe bus; these chips would be connected directly to an HT or QPI channel. The other, more likely, possibility is that some vendor will develop its own proprietary interconnect. Why create a proprietary interconnect? I think the answer is clear: PCIe is not meeting the market needs for technical computing, which requires high-speed communication between thousands of nodes.
If we are going to address the problem, it is not going to be with PCIe. IBM recently announced the P775, which has a proprietary interconnect. Is this the first of a whole series of such machines from the vendor community? Of course, only time will tell, but doing critical science with PCIe 4.0, which will offer a 2x improvement over PCIe 3.0 and arrive sometime in 2016 or so, is not going to work for the science community. There needs to be a much more significant improvement in communication performance, combined with new algorithms that reduce the amount of communication, or there will not be significant advances.