Multihoming Cannot Be Solved within a Network
Henk made an interesting comment that finally triggered me to organize my thoughts about network-level host multihoming1:
The problems I see with routing are: [hard stuff], host multihoming, [even more hard stuff]. To solve some of those, we should have true identifier/locator separation. Not an after-thought like LISP, but something built into the layer-3 addressing architecture.
Proponents of various clean-slate (RINA) and pimp-my-Internet (LISP) approaches are quick to point out how their solution solves multihoming. I might be missing something, but it seems like that problem cannot be solved within the network.
TL&DR: You cannot solve host multihoming within the network layer while having summarizable network addresses. You have to solve it somewhere at the upper edge of the transport layer2.
Imagine a mobile phone with a WiFi and 5G connection provided by two different ISPs. Those ISPs have some upstream ISPs until a pair of them meets at some exchange point.
 Host
┌────┐   ┌───────────┐    ┌───────────┐    ┌─────┐
│    │   │           │    │           │    │     │
│   A├───┤   ISP-A   ├────┤   ISP-C   ├────┤     │
│    │   │           │    │           │    │     │    ┌────────┐
│    │   └───────────┘    └───────────┘    │     │    │        │
│ H  │                                     │ IXP ├────┤ Server │
│    │   ┌───────────┐    ┌───────────┐    │     │    │        │
│    │   │           │    │           │    │     │    └────────┘
│   B├───┤   ISP-B   ├────┤   ISP-D   ├────┤     │
│    │   │           │    │           │    │     │
└────┘   └───────────┘    └───────────┘    └─────┘
The host (mobile phone) could use node- or interface addresses. Let’s start with the assumption that IPv6 was the worst decision ever, and that node-level addresses of something like CLNP would save the world and bring multihoming to the masses.
Under that scenario, the host would be known only by its node address (H). Everyone between the bifurcation point (IXP) and the host needs to know that there are two paths toward the host, and what their state is. That includes ISP-A, ISP-B, and all upstream ISPs, including those directly connected to the IXP3. Clearly not the brightest idea for ISPs with millions of customers.
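A back-of-the-envelope sketch of why this scales badly (all numbers are invented for illustration, not measurements):

```python
# Toy model: routing-table entries needed when multihomed hosts are known
# only by node addresses. Every multihomed host punches a host-route hole
# in the aggregate, so everyone between the bifurcation point and the host
# carries per-host state on top of the summarized prefixes.

def entries_with_summarization(aggregate_prefixes: int) -> int:
    # Each ISP advertises only its aggregate prefixes upstream.
    return aggregate_prefixes

def entries_with_node_addresses(aggregate_prefixes: int,
                                multihomed_hosts: int) -> int:
    # Aggregates plus one entry per multihomed host.
    return aggregate_prefixes + multihomed_hosts

aggregates = 1_000        # assumed prefixes an ISP advertises today
phones = 5_000_000        # assumed multihomed mobile customers

print(entries_with_summarization(aggregates))            # 1000
print(entries_with_node_addresses(aggregates, phones))   # 5001000
```

The exact numbers don't matter; the point is that the second figure grows linearly with the number of multihomed hosts, while the first one doesn't.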
OK, maybe IPv6 isn’t as bad as some people would like us to believe. Back to the drawing board. What if we would have two addresses on the multihomed host (one belonging to each ISP), and then (according to RINA) build an overlay network with global host addresses on top of that? Congratulations, you just proved RFC 1925 rule 6a and reinvented LISP.
There are two ways to implement the overlay network idea: either you keep the host TCP stacks unchanged and build the overlay solution within the network (LISP) or you modify the host networking stack.
Let’s stick with the network-based solution for a moment. Regardless of how you plan to implement it, you’ll need a proxy between the “native” protocol stack on the inside (private network) and the “layered” protocol stack on the outside (global Internet). That proxy will have to keep mappings between the underlay and overlay addresses of all external devices accessing the internal servers. Now ask yourself how big a proxy Facebook would need. Why do you think they never deployed LISP in production?
On top of that, you’ll have to deal with two small details: path liveness checks (is the remote underlay address reachable?) and cache invalidation (is the remote host mapping still valid?). The latter is supposedly one of the hard problems in computer science. For more details, read the Architectural Implications of Locator/ID Separation IETF draft4.
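The cache-invalidation problem can be illustrated with a toy TTL-based map cache. The EID/RLOC names are made up, and this is deliberately simpler than any real LISP implementation; the point is only that a cached mapping can stay "valid" long after it stopped being true:

```python
class MappingCache:
    """Toy LISP-style map cache: overlay (EID) -> underlay (RLOC) with a TTL.
    Shows why stale entries are a correctness problem, not just a
    performance one."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.cache = {}            # eid -> (rloc, inserted_at)

    def insert(self, eid, rloc, now):
        self.cache[eid] = (rloc, now)

    def lookup(self, eid, now):
        rloc, inserted = self.cache.get(eid, (None, None))
        if rloc is None or now - inserted > self.ttl:
            return None            # miss: must query the mapping system
        return rloc                # may be stale if the host moved!

cache = MappingCache(ttl=60.0)
cache.insert("host-H", "rloc-of-ISP-A", now=0.0)
# Host H loses its ISP-A link at t=10; until the TTL expires (or an
# explicit invalidation arrives) we keep black-holing traffic:
print(cache.lookup("host-H", now=30.0))   # rloc-of-ISP-A (stale!)
print(cache.lookup("host-H", now=90.0))   # None (finally expired)
```

Shorter TTLs shrink the stale window but multiply the load on the mapping system, which is exactly the trade-off the draft discusses.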
Back to the drawing board (take two). What if we’d implement multihoming within the hosts? Congratulations, finally you realized that complexity belongs to the network edge (see also: RFC 3439). Not surprisingly, there are widely-deployed solutions using this approach, for example MP-TCP5.
As much as I like MP-TCP, the limitations of its environment force it to remain a half-baked solution. It has to emulate the TCP socket API and thus has to establish the first TCP session with a well-known server IP address, so the server cannot be multihomed to multiple providers. The only way to fix that is to change the socket API (fat chance) and hopefully add a session layer to the TCP/IP protocol stack while doing that.
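For what it's worth, opting into MP-TCP on Linux 5.6+ is a one-constant change precisely because the socket API is preserved, which is also why the first subflow still targets a single well-known server address. A minimal sketch (the fallback path is an assumption about kernels built without MP-TCP support):

```python
import socket

# IPPROTO_MPTCP is 262 on Linux; older Python versions may not expose
# the constant even on kernels that support it.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_socket():
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             IPPROTO_MPTCP)
    except OSError:
        # Kernel without MP-TCP support: fall back to plain TCP. This
        # graceful degradation is what makes incremental deployment
        # possible -- and what chains MP-TCP to TCP's semantics.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = mptcp_socket()
# The first subflow is still established toward one server address;
# additional subflows (e.g. over the second ISP) are added only later.
s.close()
```

Note that nothing in this API lets the client express "connect to the server, whichever of its providers is reachable" -- that decision was already made when the destination address was chosen.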
Site multihoming is an even more gruesome beast that we’ll carefully avoid in this blog post. ↩︎
I might have missed something significant, in which case please free to point it out in a comment. ↩︎
The proof is left as an exercise for the reader. ↩︎
Reading it, you’ll probably realize why (like the Operators and the IETF draft) it was never published as an RFC ;) ↩︎
Not sure whether MP-TCP is widely deployed? When was the last time you’ve seen someone using Siri? ↩︎
I am interested to see if HIP fixes all of this in the future. It looks like it has the potential to be a well-designed solution. It doesn't meet your "widely-deployed" criterion yet, but it looks promising and it doesn't feel hackish.
"(according to RINA) build an overlay network with global host addresses on top of that?"
I'm not an expert on RINA, but nowhere have I read that's what RINA proposes: definitely not host address in the routing table. Can you show me where it's written?
"Under that scenario, host would be known only by its node address (H). Everyone between the bifurcation point (IXP) and the host needs to know that there are two paths toward the host, and what their state is. That includes ISP-A, ISP-B, and all upstream ISPs including upstream ISPs directly connected to IXP2."
Why? What's the point of summarization then? The point of addressing that's pushed forward by John Day, father of RINA, which I agree with, is an address should be route-independent. Summarization done with hierarchical addressing can even bound the number of routes in the Internet RIB. With current interface addressing BS, the global RIB is about a million routes and counting. Why does it keep rising? Because it has no boundary condition imposed. Why are network engineers called engineers when they have no understanding of boundary conditions, a fundamental aspect in engineering?
Akamai uses 2-level DNS to implement their infrastructure; it's the same concept as hierarchical addressing, and it works for them. Now coming back to the example, ISP A and B just need the summary routes of whatever blocks C and D possess, and on and on it goes. How should this fail to work?
Also, in our discussion of the same topic way earlier, you pointed out that money is an incentive, no ISP can make money if hierarchical addressing is used. Fair enough. But the next question is, since money ruins a lot of things in life (IT is no exception), is money the right question to ask? When something is considered national security or national asset too valuable to fall into private hands with dubious interests, like the main grid or the oil-gas pipelines, govts will be responsible for building and managing it. Why is communication infrastructure any different? Isn't it part of national critical infrastructure these days? Why give it to private hands when they can potentially sell out to your enemies?
Back to the global RIB, we have tight-coupling and complexity-collapse problem. Just because it hasn't happened yet (or it might have) doesn't mean it won't. The same way the financial idiots keep claiming that derivatives are all well and good, until they blew up in their faces and were so close to bringing down the whole financial system in 97-98 and 2008; even now the global financial system is fucked up in more ways than one; just a looming failure of Credit Suisse risks bringing down the whole shebang. Global tight-coupling complexity surely is a good thing.
IPv6 helps make things worse, and way worse for those who know hardware. So while 'worst decision ever' is debatable, it's pretty up there among the very bad ones. CLNP recognizes the need for hierarchy in addressing, that's what we can learn from. Should we go back to it? Hell no. IPv4 is simple, and everything simple is good. Start with this simplicity, and improve it. Funny the MPLS guys got it right by accident; they unconsciously implemented hierarchical addressing with MPLS VPN; it worked.
Another point that you mention somewhere is: depending on the scenarios, at worst you might end up with millions of routes in the local network, in exchange for a clean, short global RIB. I don't think that's the case if you implement the hierarchical addressing the right way, but let's just assume this is the case, is it that terrible? The blast radius is contained within the network. OTOH, a global RIB with millions of routes has the potential to result in massive errors that can bring down the whole or a large part of the Internet. Containment vs contagion: a clear win to me.
Looks like John Day is demonized as an anti-IP guy. He's not. Just because he invented RINA, doesn't mean he's gone nuts about it; that's what I like about him: common sense and simplicity. His views on addressing and multihoming are here for anyone with an open mind:
As for complaint on why his book wasn't written from an engineering perspective, well, he already stated that in the book and made clear in the title of the book for those with subtlety. He tried to propose an alternative model of networking based on his decades of experience (no, he's not just an armchair expert in academia), and that hierarchical model works well when applied to IP.
MP-TCP and SCTP are just lipstick on a pig. A typical result of adding more complexity to solve complexity, and still failing to solve it, when the fundamentals are wrong.
Welcome back! I expected a comment from you ;)
Your comment prompted me to summarize the point of the blog post into a simple TL&DR, which is then followed by an analysis of all the potential solution architectures I can see and what their implications are. I can't see how your comment addresses any of those; if I missed anything, please point it out.
> "(according to RINA) build an overlay network with global host addresses on top of that?"
>
> I'm not an expert on RINA, but nowhere have I read that's what RINA proposes: definitely not host address in the routing table. Can you show me where it's written?
I did not claim that. I said RINA proposes to solve every challenge by adding another layer, finishing with host addresses on the top layer. I might be wrong about that, but that was the only high-level summary I could get from all the materials you sent me. I just pointed out the implications of that approach.
> Why? What's the point of summarization then? The point of addressing that's pushed forward by John Day, father of RINA, which I agree with, is an address should be route-independent.
... I pointed out the implications of various approaches. If you feel my conclusions are wrong, please point out where I went wrong.
Re RINA, it essentially is a network model, like the OSI. Because of that, it can make use of IP as its addressing scheme no problem. So RINA and IP are not mutually exclusive.
RINA proposes a self-similar network model, making use of one layer, and repeating it as many times as the network engineer needs, so there's no fixed number of layers. For simple networks, 3 should suffice most of the time, but as you need more, you can span more. So it's dynamic and recursive.
Russ White is a proponent of RINA himself, and he too recognizes that OSI is missing a layer, here (in the We are missing a layer section):
Russ misunderstood RINA as implying a fixed number of layers, and an even number at that; this was corrected in the comment section.
So no, RINA doesn't propose to solve the problem by adding another layer, I hope that much is clear by now.
Back to the example in the blog, if PI addresses are used, then multihoming is solved quite easily. A and B (the host) just do DNS to get the node address of Server, and forward their packets to ISP A or B, which then have the Server IP block (summary) entry with an interface pointing to either ISP C or D, which then ends up at Server. And because TCP now uses the node address, not an interface address, an interface going down won't trigger a session reset. That's lightweight.
Now if provider-based addresses are used, then Server would have 2 node addresses. Say A makes a DNS query for Server. That would return one of the 2 node addresses, and A will pick one and pick an interface to send it to Server, either via ISP A or B -- Akamai uses this 2-level approach, hence my mentioning of them. So if the interface to either ISP A or B goes down midsession, as long as A or B knows the route to the existing node address of Server, the session won't reset. Providers can negotiate that. Worst case scenario, if they don't want to negotiate anything, the session resets. But the point is, it then becomes a non-technical issue, not a fundamental one. And nowhere do I see state explosion.
So as I see it, multihoming can be solved using the network with a 2 level addressing structure, much like the IP-MAC pair in existing network, with DNS name mapped to IP, not to MAC, so MAC change doesn't cause IP session reset.
Plus, it has the added bonus of simplifying things, reduces the size of global RIB, and removes the tight-coupling which can lead to global synchronized collapse.
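The host-side half of this claim is easy to model. Here's a toy simulation (all names invented) of a session keyed on node addresses surviving a link failure; note it deliberately models only the session identifier, not the routing state that the rest of this thread argues about:

```python
# Toy model of the two-level scheme sketched above: the transport session
# is keyed on route-independent node addresses, while forwarding uses
# whichever link address is still up.

class Host:
    def __init__(self, node_addr, link_addrs):
        self.node_addr = node_addr
        self.links = {link: True for link in link_addrs}   # link -> up?

    def usable_link(self):
        # Pick any link that's still up (None if all are down).
        return next((l for l, up in self.links.items() if up), None)

class Session:
    """Transport session identified by node addresses only."""
    def __init__(self, local, remote):
        self.key = (local.node_addr, remote.node_addr)

host = Host("H", ["A", "B"])     # node address H, links via ISP-A / ISP-B
server = Host("S", ["C", "D"])
sess = Session(host, server)

before = sess.key
host.links["A"] = False          # link toward ISP-A fails mid-session
after = sess.key                 # key unchanged -> no session reset
print(before == after)           # True
print(host.usable_link())        # B
```

The open question, of course, is who in the network knows how to reach node address H once the ISP-A path is gone -- which is exactly what the follow-up questions below probe.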
You keep repeating the same thing. Please go and figure out what addresses each box (or ISP) in that diagram uses and what forwarding tables it needs to have, and stop the handwaving tricks like "we use PI addresses... but because TCP uses node addresses, we won't get session resets". Do you have one TCP session or two? Are they tied to link addresses or node addresses? What happens with the TCP session tied to ISP-A address when the link to ISP-A goes down?
I'm pretty sure you'll arrive at one of the choices I mentioned in the blog post. Also, please stop throwing DNS (and Akamai's particular implementation) into the mix. DNS is like portable phone numbers, it's irrelevant to this discussion, unless you want to say "we can't solve the problem in the network layer."
Ivan, the multihoming problem is a problem of delivery (2 interface addresses appear as 2 hosts to the sender, confusing the heck out of them), so it's by nature a network problem.
"Do you have one TCP session or two?"
One. Like I said, there's nothing different from the way things work now except TCP session now uses node address instead of interface address.
"Are they tied to link addresses or node addresses?"
They're tied to node address, so if link addresses go down, sessions won't reset as long as there's a route to the node address present somewhere in the path.
"What happens with the TCP session tied to ISP-A address when the link to ISP-A goes down?"
Then the host connects to Server via ISP B, using link address to ISP B to forward packets to Server. The TCP session remains unchanged (not reset), because the node addresses remain the same. As long as ISP B has an entry to route to Server's network, packets get delivered, session won't reset. Node address is route-independent in that sense. It cares nothing about the failure or non-failure of the underlying link addresses, as long as there's still at least 1 link up that it can use to reach the destination, the session will remain on. Apart from that small detail, it's the same standard datagram delivery throughout the network.
That's where the missing 1 layer comes from. So in current IP communication, we have interface IP address in the IP layer. In the 2 layer addressing, we have TCP's IP (the node address) and interface IP (link address). Only the node networks (the network entry, not individual host entries) go into the global RIB. The interface IP networks can remain private. Hope I make myself clear.
And from that, we can see that the MAC layer is redundant because it performs the same function as the interface IP (both name/identify the interface), so it can be removed, and we retain the same number of layers for simplicity (whether it can be removed due to vested interest of vendors putting so much money into it, is another issue).
Btw, Bela wrote in one of his comments below:
"As soon as you have multiple transmission layers, you have more than just one ID and location separation. Actually, you have a number of addresses to be mapped together, more than two. Like in an MPLS stack those multiple labels on top of each other. This is daily business for telcos for ages and they know how to do it."
Bela just repeated the hierarchical addressing model that John Day mentions in his work, in slightly different wording. Looks like Bela, through his vast experience and knowledge, has also come to see the need for hierarchical addressing for scalability. Needless to say, I agree with him 100% on that.
So in your proposal the host has interface addresses and a node address, and the TCP session is terminated at the node address. Cool.
Next question: which devices and networks know where the node address is, and which ones know the mapping between node address and interface addresses?
"which devices and networks know where the node address is"
Since Host's network blocks (2 in this case) are given to it by ISP A and ISP B from their respective blocks, ISP A and B know their own given blocks, but by default, not the other one's block.
This poses the question: when Server communicates back to Host using the ISP B link (because A has failed), the session is still up (it was established using an address from ISP A's block), so the node address hasn't changed -- and when ISP B receives the packet, it wouldn't know where to forward it.
This should not be a problem if ISP A and B negotiate to carry routes for their dual-home customers. Then they can place each other's block entries into their RIBs. These will be network entries, not host entries, so no RIB explosion. And resource accounting can be turned on at the entry point to each ISP to keep track of the amount of data that one ISP carries for the other, so they can bill each other accordingly -- or whatever policy they'd agree to.
ISP D and C don't have to keep track of ISP A and B's specific routes. This is where we can use the dynamic layering proposed by John Day. By having packets going out of B to D (or A to C) encapsulated with another address from B or A (the total number of layers is now 3, no more), C and D simply forward messages coming back from Server to Host via the same path from which they came, and we have path stickiness with minimum effort. The border routers at B or A are the only ones keeping track of this -- border routers are supposed to be powerful, so they have the hardware needed.
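The encapsulation idea above can be sketched as a pair of toy functions (all addresses invented); the point is that the upstream ISPs only ever look at the outer header:

```python
# Toy sketch of the "extra encapsulation layer at the border" idea:
# ISP-B's border router wraps outbound packets in its own address, so the
# return path is pinned without ISP-C/D carrying customer routes.

def encapsulate(pkt, border_addr):
    # Border router adds an outer header carrying its own address.
    return {"outer_src": border_addr, "inner": pkt}

def return_packet(fwd_pkt):
    # Upstream ISPs route the reply on the outer header alone, so it
    # naturally flows back through the same border router.
    return {"outer_dst": fwd_pkt["outer_src"],
            "inner": {"src": fwd_pkt["inner"]["dst"],
                      "dst": fwd_pkt["inner"]["src"]}}

pkt = {"src": "H", "dst": "S"}
fwd = encapsulate(pkt, border_addr="ISP-B-border")
rev = return_packet(fwd)
print(rev["outer_dst"])          # ISP-B-border
print(rev["inner"]["dst"])       # H
```

Whether this is materially different from the LISP-style overlay dismissed earlier in the post is, of course, the crux of the disagreement.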
Note that this is what RINA means in a nutshell: do away with a fixed number of layers, and add the capability to spawn another layer if need be (dynamic layering). Will this lead to layer explosion? Not at all. You need max 3 layers if you implement this situation. Most of the time you need only 2 if multi-ISP dual home is not used. This is OS' virtual memory scheme applied to network.
Of course, if C and D are willing to carry A and B's routes as well, then we only need 2 layers. Their RIBs will be bigger, but given that 1) by routing on node addresses we considerably reduce the number of global RIB entries and cut back severely on the number of updates due to routing changes (node addresses don't change as often as interface addresses do) and 2) not every customer needs this level of HA (dual-homing to 2 ISPs), that should not be an issue.
Also note that in reality, if a company considers itself important enough to have dual-ISP multihoming, very likely they are either trunked or dually connected to each ISP, so total ISP failure is rare. Our network is one of those that need HA -- we serve roughly 60k users, many of them demanding due to the nature of their collaborative work, on a regular basis -- so we have 2 links to our ISP. That's 2 links dual-homed to 1 ISP. In the past 13 years, we never had, not once, a case of total cut-off. That's how rare total failure is when things are simple.
So given that low frequency of total failure, most of the time, if we use node level addressing for TCP sessions, we wouldn't have session reset and our traffic will just traverse one ISP from start to end. So the case of total ISP failure is really an outlier/corner case, can be treated as such if money is a concern: that is, customer can opt to have session reset in those cases instead of paying extra for the 2 ISPs to cooperate with each other on the customer's traffic. Of course, as explained above, if they want even that final bit squeezed out, we can still do it for them using 2-level addressing scheme. We can even reference how the mobile guys implement roaming to learn a few implementational tricks from them. The point is: this becomes a non-tech and non-structural issue.
"which ones know the mapping between node address and interface addresses?"
This should be mostly answered above already. Each ISP knows its own node-level blocks that it gives out to customers, and some of the others' blocks if they want to cooperate -- this is the kind of address that goes into their global BGP RIB. As to interface addresses, they're internal to each router in the network, so each router keeps track of its own interface addresses individually; they're local, not shared. The mapping between node and interface addresses is therefore maintained individually on the routers, the same way they currently maintain the mapping between IP and MAC, or IP and outgoing interface -- no change at all.
We use Hierarchical FIB to store this mapping (you've had several blog posts discussing H-FIB in the past). So yeah, the hardware forwarding process stays the same as they are now, no change whatsoever.
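A minimal sketch of the H-FIB indirection being described (structure and names are illustrative, not any vendor's implementation): node routes point at a shared next-hop object, so a link failure rewrites one indirection entry instead of every dependent route.

```python
# Toy hierarchical FIB: two levels of lookup, node prefix -> next-hop id
# -> outgoing link address.

class HFib:
    def __init__(self):
        self.node_routes = {}    # node prefix -> next-hop id
        self.next_hops = {}      # next-hop id -> outgoing link address

    def lookup(self, node_prefix):
        return self.next_hops[self.node_routes[node_prefix]]

fib = HFib()
fib.next_hops["NH1"] = "link-to-ISP-A"
for prefix in ("S-block-1", "S-block-2", "S-block-3"):
    fib.node_routes[prefix] = "NH1"     # all share one next-hop object

# Link to ISP-A fails: a single update repairs every dependent route.
fib.next_hops["NH1"] = "link-to-ISP-B"
print(fib.lookup("S-block-2"))   # link-to-ISP-B
```

This is the same indirection trick that makes prefix-independent convergence possible in today's routers, which is why the commenter argues the forwarding hardware wouldn't need to change.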
In fact, I don't see any change at all to all the RIB/FIB structures, or the way routing and forwarding, both intra- and inter-node, are done, compared to now. NONE. The only thing that changes is we add a true Internetwork layer, with true Internetwork Address (the node address) to the mix, and this kind of address goes into the global BGP RIB. The interface addresses are relegated to the local router's internal tables.
So essentially we only make 1 small change to the addressing structure to complete what's missing, and that's it. This is exactly the OS' virtual memory structure, in order to scale the system to very large memory. Everything else, remains the same as now.
I almost stopped at...
> Since Host's network blocks (2 in this case) are given to it by ISP A and ISP B from their respective blocks, ISP A and B know their own given blocks, but by default, not the other one's block.
... and definitely stopped at ...
> This should not be a problem if ISP A and B negotiate to carry routes for their dual-home customers. Then they can place each other's block entries into their RIBs.
You're solving a problem you like talking about, not the problem described in the blog post.
I simply try to solve the problem of 2-way communication between Host and Server, both multihomed, in the situation(s) that you asked. I'm pretty sure that's the problem of the blog, as written in the TLDR section and as shown in the diagram.
According to the TL&DR, "You cannot solve host multihoming within the network layer while having summarizable network addresses." I pointed out above that summarizable network addresses (network entries, not host entries) are the ones installed in the RIBs of all ISPs in the path. Even when A and B agree to install some of each other's routes, those routes are all network routes, not host entries, so I can't see how I contradicted your problem statement.
And nowhere have I seen how Host fails to communicate with Server and vice versa, so I can see that multihoming is readily solved in the network. If you notice somewhere along the path where the packets would fail to get delivered to the next node, please point it out.
Of course, if you really meant: You cannot solve host multihoming within the network layer, as in the current OSI IP layer, which is missing half the addressing structure and not a true Internetwork layer, then yes, I agree, we can't solve it in the network layer the way it is now. We need a complete addressing scheme to solve it.
Real-life commercially implemented LISP is different. It is using reliable transport and PubSub. Then it has no cache problem. The convergence is usually much faster than with other routing protocols. The size of the EID table is a matter of capacity planning and memory profiles. Here there is something to do. Aggregation could help. A hybridization of the push vs. pull model would be available by selective PubSub subscription. This is not implemented commercially yet, but we are pushing for this. :-) We have just finished a large validation exercise successfully in a consortium of 10 big companies for using a specially profiled version of LISP called Ground Based LISP (GB-LISP) for multilink mobility in aviation networks. It uses a special protocol on the air-ground link. GB-LISP is limited to integrating radio access networks and ground users in the ground networks. This will be further improved to fulfill all the strict requirements for a safety-critical application. Now it is under codification as one of the recommended implementations for upcoming ICAO and EUROCAE/RTCA standards. The major IETF LISP RFCs have been recently updated from Experimental to Standards Track. This is driven by finding new use cases that are better than the original ideas. Another two rounds of IETF LISP standardization are foreseen...
Thank you! Maybe I should use "O-LISP" for "Original LISP" ;)
I totally agree that properly fine-tuned LISP is useful, and I'm glad to hear you're pushing it forward. I guess we can also agree that LISP is not a solution to global host multihoming the way it was claimed to be.
Actually, the telephony network, including mobiles, is already upgraded for ID vs. location separation. This is called number portability and is enforced by laws and regulation, so you have no freedom to ignore it or go any other way. It works with large portability databases for millions or even hundreds of millions of users. Of course, this is specially designed for this challenge and costly. But it still has a positive business case at the end. So if those old telcos can solve similar problems, then why do you think it is not possible to solve it for IP networks with a similar scheme? I know the devil is in the details, but not all services have the same continuity requirements. Most of the services can tolerate some drops and some delays to make an address resolution. For highly demanding services, of course, you need other methods. But this is a small minority only. That is where we promote a selective simulcast solution. I agree that it is best to do such simulcast at the application layer. Our company was a driver behind the radio voice linked session EUROCAE standards. This is a double simulcast between the voice application components. It is also needed for having a maximum independence of the two alternate network paths. Our company has also pushed forward the simulcast scheme for surveillance information. This is now used all over the air traffic control domain for radar sensor data with up to 4-times redundancy. Since simulcast is costly, especially in narrowband safety-critical networks, simulcast should be selective by application or traffic class on limited radio links. You would apply it only when it is absolutely needed. Otherwise, you would just do failover at a certain speed as required. By the way, the latest UAS standards promote that the link failover decision is at the hand of the remote pilot. We are now fighting with this, since uncoordinated path selection could make congestion even worse...
You're comparing apples and cheeseburgers. Phone number portability is really a DNS function (and we both know how convoluted call setups can be).
It's easier to solve some things (like number portability) if you have multiple name spaces, and you don't have to involve all of them in fast path.
Anyway, I should have been more precise: you cannot solve host multihoming within the network layer.
I had a vague feeling we'd had this discussion before :) Here's the blog post I wrote at that time:
I am just using MP-TCP for my Internet connection. As do millions and millions of users. But not at the application layer; rather as an underlay transmission solution. What IP guys tend to forget is that a real network is a hierarchy of a large number of transport layers. Not the OSI model. That has nothing to do with real wide area networks. People should study the ITU-T G.800 series. That is where real networks are properly modeled. We have a cultural problem here, denial of telco know-how...
As soon as you have multiple transmission layers, you have more than just one ID and location separation. Actually, you have a number of addresses to be mapped together, more than two. Like in an MPLS stack those multiple labels on top of each other. This is daily business for telcos for ages and they know how to do it.
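This stacked-locator point can be reduced to a toy MPLS-style label stack (the label names are invented for illustration): each transmission layer pushes its own locator, and the layers are peeled off in reverse order.

```python
# Toy MPLS-style label stack: each layer pushes its own locator on the
# way in and pops it on the way out (LIFO order).

stack = []

def push(label):
    stack.append(label)

def pop():
    return stack.pop()

push("service-label")     # innermost: identifies the VPN/service
push("transport-label")   # outermost: selects the path across the core
outer = pop()             # the core strips the transport label first
inner = pop()             # the edge then sees the service label
print(outer, inner)       # transport-label service-label
```

Each label is an ID/locator mapping in its own layer, which is Bela's point: with several transmission layers you always have more than one such mapping in play.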
On the other hand telcos are using SCTP for ages inside their infrastructure. Because they could use software specifically designed for SCTP, they do not need backward compatibility to socket APIs. And it works well for high-availability and "multi-homing".
This multi-homing is a problem only if you are limited to old enterprise feature sets on your devices. It is also not taught in most courses how to design high-availability networks... :-)
In many cases, you cannot change your end systems, they are not able to support multi-homing directly, and then you shift the problem to the network if feasible. If you put a multiply-attached proxy in front of such a legacy device, then you can achieve very good availability, since the local link, which is a single point of failure, might be good enough in reliability. Virtual IP addresses are also used quite successfully in such scenarios.
Another simple solution used in safety-critical networks is to use multiple end-user devices with the same functionality. Then if you have an operational issue, you just move to the next workstation. This is the ultimate high availability, but of course with some cost.
I am a big fan of the "design to cost" and risk-management approach in engineering. For each cost level you might need different architectures. There is no single solution for everything. We need diversity as well. This is also fundamental for safety. It is needed from the business perspective for leveraging your negotiation opportunities...
Thanks for pointing this out -- I got a pointer from someone about a deployed BT solution a long while ago, and still haven't found time to blog about it.
If you're using something similar to what BT is doing, then you have a pair of MP-TCP proxies, effectively reinventing ML-PPP (or SD-WAN).
Am I missing something? Do you have a more interesting setup?
@Bela, I hold great respect for the telco guys who worked hard to organically build their networks over the years to meet difficult challenges. And you're 100% right with "a real network is a hierarchy of a large number of transport layers." Very true for mobile networks.
That being said, I have to take Ivan's side when it comes to simplicity and simplification. The reason mobile networks have a large number of transport layers is that they had to keep propping up their networks to support different name spaces, one on top of another, over the years. Now that we have decades of experience, we can look at that model and see if we can simplify here and there, to clean it up and optimize the good bits. Let's face it: having more parts (more complex) is never a good thing, and it's a nightmare if something goes wrong; anyone who operates complex systems can attest to that.
I have no doubt LISP can work on a smaller scale with optimization, and things like path liveness problems are not such a big deal in those networks. Scale matters (a corollary of boundary conditions), so things that can't work on a large scale, like OpenFlow for example, can be terrific solutions at smaller ones. There's no one-size-fits-all (again, you're spot on with that). I think what Ivan pointed out in this case is how LISP fares at the global Internet scale, and last time you already commented that you never intended for LISP to be a solution for Internet routing, IIRC.
The same can be said for MP-TCP and SCTP. They have flaws, but it doesn't mean they can't be of great use in certain situations. Pointing out their fundamental problems is not the same as saying "let's trash those into the dustbin". Far from it ;) It's more so that we can all benefit from the findings and hopefully address the situation.
> having more parts (more complex) is never a good thing
Rule 12. My favorite rule!
At an abstract level I tend to approach issues like this with two questions: 1) what information is needed to complete the function, and 2) where is that information (knowing that where information lives is often a choice).
That said, it seems that the above conversation has resolved that the issue is a name space one, and there are various ways of achieving that end result.
Yeah, we were discussing one of the two hard things: https://twitter.com/secretGeek/status/7269997868