In my previous entry, I made the conjecture that since PCIe performance is not getting faster quickly enough, vendors are going to move to proprietary interconnects. I am assuming this move will happen, because the market will require it.
What does that world look like? What does the interconnect look like, and what does the switch look like? Right now, high-performance computing is dominated by programs written to communicate via MPI (Message Passing Interface). Since many of these complex codes took more than 10 years to write, changing them over in the next few years is out of the question. So most, if not all, scientific and engineering codes will continue to be written with MPI. Some codes, and parts of other codes, only need to communicate node-to-node (point-to-point), while others need to communicate across the whole network of nodes (collectives).
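To make the two patterns concrete, here is a toy sketch in plain Python, with no real MPI or network involved: each "rank" is just an index into a list, and the node count and data are made up for illustration.

```python
# Toy model of the two MPI communication patterns. Each "rank" is
# simply a position in a Python list; no actual messages are sent.

def halo_exchange(values):
    """Point-to-point pattern: each rank talks only to its immediate
    neighbors (roughly what MPI_Send/MPI_Recv do in a stencil code).
    Returns, for every rank, the (left, right) neighbor values."""
    n = len(values)
    return [(values[(i - 1) % n], values[(i + 1) % n]) for i in range(n)]

def allreduce_sum(values):
    """Collective pattern: every rank ends up with the global sum
    (roughly MPI_Allreduce), which stresses the whole network."""
    total = sum(values)
    return [total] * len(values)

ranks = [1, 2, 3, 4]
print(halo_exchange(ranks))  # neighbor-only traffic: [(4, 2), (1, 3), (2, 4), (3, 1)]
print(allreduce_sum(ranks))  # all-to-all traffic: [10, 10, 10, 10]
```

The point of the contrast is that the first pattern only ever needs fast paths to nearby ranks, while the second needs the entire fabric to perform well at once.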
This, of course, means that the network design needs to be different for different codes, or for different parts of the same code. That is a costly undertaking. It is far easier to design for, and therefore cheaper to provide, fast communication to nearby nodes than to faraway nodes. It is almost as if you need two networks, and that may be exactly what happens.
What if one network interface provided high-speed, low-latency communication to nearby nodes through a specialized switch for local traffic, while a second interface was designed for more global communication with its own specialized switch? This could likely even be done today with multiple InfiniBand connections to different switches. It would, however, require some modification to how the topology is addressed. I would not be surprised if sometime before 2015 this type of technology becomes widely available on standard x86 hardware.
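The routing decision such a two-network design implies can be sketched in a few lines. This is a toy model only: the group size and the latency figures below are invented for illustration, not measurements of any real fabric.

```python
# Toy sketch of the two-network idea: nodes grouped under fast local
# switches for neighbor traffic, with a separate global switch for
# everything else. All numbers here are assumptions for illustration.

LOCAL_LATENCY_US = 1.0    # assumed: specialized low-latency local switch
GLOBAL_LATENCY_US = 5.0   # assumed: slower switch spanning the cluster

NODES_PER_GROUP = 4       # assumed: nodes sharing one local switch

def latency(src, dst):
    """Choose which interface/switch a message would traverse, based
    on whether both nodes hang off the same local switch."""
    if src // NODES_PER_GROUP == dst // NODES_PER_GROUP:
        return LOCAL_LATENCY_US   # first interface: local network
    return GLOBAL_LATENCY_US      # second interface: global network

print(latency(0, 3))  # same group, takes the local network: 1.0
print(latency(0, 4))  # different groups, takes the global network: 5.0
```

The modification to topology addressing mentioned above would amount to exactly this kind of decision, made in hardware or in the MPI layer rather than in application code.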