“Two Generals’ Problem” doesn’t make sense

Two armies are preparing to attack a city from opposite sides. The General of army A is orchestrating the attack, and has decided that they must attack simultaneously at noon on March 3rd to succeed. He sends a sealed, encrypted letter by messenger across the valley to the General of army B, informing him of the plan, but worries that the messenger might be intercepted, so General B may not be informed. (If the attack is not simultaneous, both attacking armies will be destroyed by the defenders.)

As it so happens, General B does receive the message, but knows that General A cannot be sure of this. He sends back a receipt of the attack plan. Then he wonders... will it go through? What if General A does not receive it and decides not to attack, being unsure of B's knowledge? And even if General A sends over an acknowledgment of the plan's receipt, there is no guarantee it will arrive.

Given this faulty communication channel, how will the generals coordinate their plan?

I originally stumbled across this problem in the context of email. Remember how unreliable it used to be, back in the early nineties, how you were never quite sure that your message had gotten through? I explored the idea of return receipts and counter-receipts, but I never quite managed to come up with a satisfactory answer. Then a few days ago, I came across a Wikipedia entry on the Two Generals' Problem while reading about Google's distributed database networks. The article stated that there is no solution to the problem.

This took me by surprise. Surely, after 50 meta-receipts have successfully passed in each direction, surely then both parties would know that the original message was successfully sent. But I couldn't prove it. Part of the problem is that we aren't interested in proof that the message has actually arrived, but rather in both parties being certain that the message has arrived and that the other party knows this. In other words, given an agreed-upon communication protocol, there must be sufficient information on each "side" to conclude that the message has arrived.

Some assumptions

For the sake of simplicity and without loss of generality, I'd like to set the following assumptions:

  • Messages are instantaneous
  • Each message is either delivered (instantaneously) or destroyed (without a trace)
  • Forgery is impossible, due to cryptography
  • If a General repeatedly tries to send a message, it will *eventually* get through

Naïve protocol

General A sends the original message (numbered 0.0) to General B, and continues sending it (messages 0.1, 0.2, 0.3...) until he receives message 1.x (a receipt) from General B. He then sends a counter-receipt, starting with 2.0. When 100 is reached, it is impossible for either party to be unsure about the status of the original message.

Message 100.x is sent by General A in response to 99.x from B, but we have two problems: 1) How does General B know when to stop sending 99.x? 2) How long should General A keep resending 100.x to let him know? Crap. It's the same problem! But the very existence of messages 90.x through 95.x (for example) tells both parties that message 0.x got through.
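
The counting above can be made concrete with a small simulation. This is my own sketch, not part of the problem statement: the loss rate, the seed, and the rule that each general resends his latest number until he hears the next one are all modeling assumptions, and the loop only terminates because of the "eventually gets through" assumption.

```python
import random

def run_naive_protocol(loss_rate=0.3, stop_at=100, seed=1):
    """Simulate the numbered-receipt protocol over a lossy channel.

    General A sends even-numbered messages, General B odd-numbered
    receipts. Each side resends its latest number until it hears the
    next one from the other side, then advances. The loop stops once
    A starts sending message number `stop_at` (100 in the post)."""
    rng = random.Random(seed)
    a, b = 0, -1        # message numbers currently being (re)sent; -1 = B is still silent
    rounds = 0
    while a < stop_at:
        rounds += 1
        # A's messenger crosses the valley (or is captured)
        if rng.random() >= loss_rate and a == b + 1:
            b = a + 1   # B received A's latest message; starts sending the receipt
        # B's messenger crosses the valley (or is captured)
        if b >= 0 and rng.random() >= loss_rate and b == a + 1:
            a = b + 1   # A received B's receipt; starts the counter-receipt
    return rounds

# With a lossless channel the exchange takes exactly 50 rounds (a advances by 2 each round):
print(run_naive_protocol(loss_rate=0.0))  # 50
```

Note that the simulation answers "how long until message 100 exists?" but not the real question, which is when either general may safely stop listening.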

Halting problem

Ultimately, this appears to be one of the worst sorts of paradoxes in computing: a halting problem. There comes a point at which the original question has been answered ("Do both parties agree on a battle plan? Yes.") but the algorithm is still running.

I'm still convinced there's a way around it, but I can't seem to find the key. Any ideas?

Responses: 16 so far

  1. Cairnarvon says:

    You're confusing "beyond reasonable doubt" with "with absolute certainty", which is the core of this problem.

  2. Tim McCormack says:

    I don't think so. From the point of view of General A, when he gets message 90.x, I think he can be absolutely certain that both parties agree. (And likewise with General B.) Look at it this way: Is it possible to have the same "evidence" (string of received messages on either side) and not have both parties agree?

  3. Cory Capron says:

    Actually, I'm not sure that we can safely make your last assumption.

    "If a General repeatedly tries to send a message, it will eventually get through"

    General A's messenger's route is set in the problem. He will go "across the valley to the General of army B." No specifications are made as to how many paths across the valley exist. If there is only one path and the messenger is intercepted, there is no reason to believe the interceptors would not continue to block that path.

    Your approach to the problem seems influenced by the e-mail format in which you first encountered it. The assumption that messages are instantaneous also proves problematic: since it is stated that the messengers will travel to deliver the message, we are left to assume, under your new condition, that the messengers are not human, and are most likely programs of some sort. This further reinforces my prior point by also suggesting that the interceptors are also programs. There is no indication in the original problem that either messengers or interceptors would upgrade in any way; thus if the same program, let's say a virus for example, is sent against the same anti-virus software that killed it once, under the same conditions, then what short of bit rot would keep it from killing the virus again?

    The other problem with instantaneous travel and assuming the messengers are not human is that we are then ruling out the human factor. The problem as set up here revolves around the fact that neither army alone is strong enough to defeat the defenders. If the messengers are human, and thus in all likelihood members of the two armies, then not only are they finite, but they are also expendable only to a limit, which might hinder the sending of hundreds of letters.

  4. Tim McCormack says:

    @Cory: These would all be valid concerns if we weren't dealing with information theory. As is often the case, the story is only there to help the reader get a quick overview of the problem.

    None of the assumptions I've made changes the essential nature of the question. For example, instantaneous messaging makes the problem easier to tackle, but a solution under that constraint would only need slight modification to fit the original problem.

    The last assumption is special. If it is possible for a message attempt to fail forever, the problem is no longer interesting -- it is impossible to derive a general solution. With the "eventually" assumption, the problem is worth working on.

  5. Cory Capron says:

    "These would all be valid concerns if we weren't dealing with information theory."

    With regard to the true Two Generals problem I suspect you are right.

    "As is often the case, the story is only there to help the reader get a quick overview of the problem."

    The story does set the conditions of the problem though. It can also change them.

    This reminds me of the story by Philip K. Dick where a man is trapped in a tunnel that shrinks him more and more as he approaches the exit. Logically he should never escape under the intended conditions, but he does, on account of becoming so small that he slips between the atoms of the tunnel's floor. The logic failed due to the representation being an impure form of the classic mathematical problem of moving from point A to point B by traveling half the distance with each step. The conditions were changed by the physical form.

    The assumptions that you are making are alterations to a representation of the Two Generals problem, not the pure problem (yours is also slightly different from the wiki representation by the way).

    One thing I find interesting, and I'll admit it's a slight leap, is that by not taking in effects of the representational equivalents (generals, messengers traveling ect.), one has to wonder where that leaves us with the term "letter" and "message."

    Let us allow your first assumption: messages are instantaneous. With this new condition I'm left to wonder how huge a leap it would be to interpret the continuous flow of messages and receipts as a continued line of communication. From there, is it such a huge jump to interpret the messages as not text, but signals? Say... audio and visual information?

    Now we have turned the flow of signals into a video chat. We are getting and sending instant messages in a steady flow of two forms, the visual feeds and the audio feeds (OUTPUT Oa and Ov and INPUT Ia and Iv). This provides a check and balance. If the generals hold up cue cards it works even better!

    Let's say the series of messages that make up the Ov feed are intercepted in the valley but not Oa. This means that General B is looking at a blank screen but can hear the attack plan loud and clear. When he responds, Iv goes through but not Ia. What General A gets is a silent film of General B moving his mouth and nodding with a piece of poster that says "We are go for the noon attack." This would all happen live, so scratch the waiting period from the signals.

    If either general gets a blank screen and no audio, they know the other General's signal is fully intercepted and thus abort. If they get an audio or visual feed from the other that indicates their own signal has been intercepted, again they abort. By taking the logical step from sending hundreds of messages instantaneously to a live video and audio feed, the problem seems to be very easy to solve.

    But it also seems like a cheat, which is why I think the assumptions alter the original pure and represented problem too much for it to still be considered the original problem.

  6. Cory Capron says:

    "the representation being an impure form of the classic mathematical problem of moving from point A to point B by traveling half the REMAINING distance with each step."

    Devil's in the details.

  7. boarders' paradise says:

    Thanks for re-opening the discussion, Tim. First of all let me apologise if my take on the problem is wrong.

    I thought about the Two Generals Problem for 1-2 hours and came to the same paradox: although both generals can be 100% sure that both received the initial message + reply, they don’t know when to stop messaging … (“When 100 is reached, it is impossible for either party to be unsure about the status of the original message. […] How long should General A keep resending […]? Crap. It's the same problem! But the very existence of messages 90.x through 95.x (for example) tells both parties that message 0.x got through.”)

    Therefore I consider this a “halting problem”, too. And I’m also convinced that there must be a way around it!!

    I’m fascinated by this problem, because obviously we want to find the point, where both generals know that they can attack, but whether we approach from – let’s say - 100 downwards or from 0 messages upwards, the limit seems to constantly “slip away” (like running after a rainbow on the horizon). I hope there are still some fascinated people out there to comment on this …

    As Wikipedia claims the problem is unsolvable, I’m sure there will soon be someone demonstrating why my example is erroneous, but I posted a possible solution to the talk section of the Wikipedia article here: http://en.wikipedia.org/wiki/Talk:Two_Generals%27_Problem

    ……………………………. ……………………………. …………………………….

    A few remarks to this article here:

    • I slightly prefer the problem’s “definition” as stated by Wikipedia. The story is a bit easier to understand there.

    • Why do you have to make some assumptions at the outset? I think they are unneeded; Wikipedia’s definition provides enough information.

    • “If a General repeatedly tries to send a message, it will eventually get through” This assumption is beside the point. Wikipedia says: “Unfortunately, the valley is occupied by the city's defenders and there's a chance that any given messenger sent through the valley will be captured.” This only means that a message can get lost just as it can reach the other General. Both outcomes are possible. The probability isn’t important here. The only point is that both Generals have to bear in mind that there is no guarantee that any given message will be delivered. That’s the point that complicates the mutual knowledge of each other’s state of information. If really all messengers get intercepted, then … bad luck, they won’t attack, but – again – that’s not the topic.

    • “Messages are instantaneous.” This assumption is really not needed. I even think it’s not compatible with the original premise and would change it. I think Cory Capron demonstrated this with his TV analogy. But I was wondering if a time assumption would give the Generals any useful information at all? Let’s say they know that the messenger needs approximately 1 hour to cross the valley. So if after (let’s say) 2 hours and 30 minutes a General has not received a “return receipt message”, he can assume that one of the two messages was intercepted. But can this be of any advantage to the Generals compared to having to assume that the message transfer time can be completely random?

    • To make the problem a bit easier: It’s not even necessary to assume that some messages were actually lost. We can assume that ALL were transferred successfully, but the Generals have no certainty about this fact, and this doubt is the ultimate problem. So instead of dealing with messages 1.01, 1.02, etc., we can just say message 1, message 2, etc.

    • I think all the comments to Tim’s article so far are beside the point. In my view, this is not a probability issue as Cairnarvon suggests. I would have answered exactly the same thing (“From the point of view of General A, when he gets message 90.x, I think he can be absolutely certain that both parties agree. (And likewise with General B.) Look at it this way: Is it possible to have the same "evidence" (string of received messages on either side) and not have both parties agree?”). By the way, that’s exactly the same impasse that other people were running into here: http://www.codinghorror.com/blog/archives/000951.html For example:

    “I disagree with the General problem being unsolvable... there's a factor that can be utilized to make the probability of success ~= 100% for all cases where a messenger has a -reasonable- chance of sending the message..... -time- I'll assume the messenger succeeds 50% of the time, and that it takes 1 hour to send a messenger. (probably higher success rate with lower time to send) Start by offering an attack time a month in advance. Re-send every hour until you get a confirm. Second general re-sends his confirm every hour until he gets a confirm, then sends a final confirm. The chances of 6 sends ALL failing approach zero. This algorithm would work for 6 sends in 24 hours time, giving 29 more days (or 696 hours) for the first general to receive the final message and confirm the attack. The failure chance in this (somewhat) reasonable argument is 3.0417465060722557176241158493362e-210... that is to say, the same chance that both generals will get struck by lightning or an internal revolution will make the attack moot while they siege the city for a month. I know the puzzle is supposed to ignore 'reality', but when the odds are under the odds of the computer glitching and reporting incorrectly, (or 1-x = 1 in windows calc), I still win.” — Random on September 13, 2007 12:56 PM

    And the answer: “The problem isn't asking for a solution that works most of the time, it's asking for a solution that works no matter what. The whole point is that such a solution is impossible. It's definitely possible (and not very hard) to come up with a strategy that works 99.99999% of the time (for as many 9s as you want).” — andy on September 13, 2007 1:11 PM

    I think both the posting with a solution and the posting in reply are wrong. It’s not at all about a strategy that always works. In fact, the Two Generals Problem is laid out in such a manner that the Generals never(!) know for sure that both know they will attack at the given time. It would be enough to find a single counter-example. I hope I found one with the example I posted on the Wiki article’s talk section.

  8. Zusukar says:

    The Email problem and the General problem are different. In the Email situation, you are only concerned that both parties know that person B got the original email. In the General problem, each General needs to know that the other General will attack. They present this as needing to know that the last message got through.

    To solve the email problem, use the same logic as TCP transmission. Send packet 1 continuously (with a delay between) until you get an acknowledgment for packet 1. Person A knows that the message was received because they got an acknowledgment. Person B knows that if they receive the message again, the ack didn't go through, so they send another. This is basically using your last assumption "If a General repeatedly tries to send a message, it will eventually get through". I don't think this applies to the General problem. There is no guarantee that the message will eventually get through.
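
    The retransmission logic described above can be sketched in Python. This is my own toy model, not Zusukar's code; the loss rate and seed are arbitrary assumptions, and termination depends on the channel eventually delivering.

```python
import random

def stop_and_wait(loss_rate=0.3, seed=7):
    """TCP-style stop-and-wait: the sender repeats the message until an
    acknowledgment arrives; the receiver acks every copy it sees."""
    rng = random.Random(seed)
    delivered = lambda: rng.random() >= loss_rate  # one messenger crossing the valley
    sends = acks = 0
    while True:
        sends += 1
        if delivered():             # the message gets through
            acks += 1               # the receiver (re-)sends an acknowledgment
            if delivered():         # the ack gets back
                return sends, acks  # the sender stops here

print(stop_and_wait(loss_rate=0.0))  # (1, 1)
```

    The `return` line is where the asymmetry hides: the sender stops with certainty, but the receiver can never be sure its final ack arrived, which is the General problem in miniature.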

    The General problem is more of a trust issue. Each General does not want to attack unless they know the other will attack. Each time a message is sent, they are unsure the other received it. The problem is not that each General knows when and how to attack (the first message). As you said, after 100 received messages, they can be sure both know that detail. The issue is whether the other General knows that they will go through with the plan.

    Each time a message is sent it re-creates this situation. The implication is: 1. The General sending the message will only attack if they know their message is received. 2. They cannot know their message is received until another message is sent, go to 1.

    If one General sends a message that they will attack no matter what, you can solve it the same as the email problem.

  9. Tim McCormack says:

    I think some of the confusion here is resulting from an unclear definition of the goal. I will attempt to restate my reading of the problem:

    Given the scenario described, where messages may not ever arrive, develop an algorithm for which the following statements are true:
    1. If two generals precisely follow the algorithm, they will never fail to coordinate an attack.
    2. There is at least one "input" (say, the initial value for a pseudo-random number generator that decides which messages are lost) for which the generals will decide to attack.

    This turns the problem into an algorithm question.

    Do you think more conditions are necessary to make the formulation useful?

  10. Gilbert says:

    You may want to look at a problem called "Binary Consensus" in the distributed computing literature. It's a generalization of this problem and very clearly stated.

    Binary consensus asks for a protocol that allows k processes to agree on the value 1 or 0. It goes as follows:

    Every process proposes an initial value, either 0 or 1. Processes are allowed to send messages to each other and decide on a value. Once a process has decided on a value, it must terminate. The following conditions must hold:

    Termination: Every (non-faulty) process must eventually decide some value.

    Agreement: All (non-faulty) processes that decide a value must decide the same value.

    Validity: If all processes propose 0, then 0 must be decided. If all processes propose 1, then 1 must be decided.

    The rub in this problem comes if you consider an asynchronous system with failures. This means the following...

    Asynchrony: Every message sent will eventually arrive, but may take arbitrarily long to do so.

    Crash Failures: Some process may crash in the middle of the protocol.

    You don't even need to consider communication failure here. It's actually impossible to solve binary consensus in an asynchronous system with the possibility of 1 crash failure. This is the famous FLP result from the 80s (can't remember authors' names off the top of my head, sorry).

    (sorry for the complete lack of formatting)

  11. Cha0s says:

    FLP = Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson

    Gilbert is correct that this problem is closely related to binary consensus. I originally encountered it as the coordinated attack problem with the following statement: Two kings (k1 and k2) on either side of a great (evil) empire wish to bring down the empire. They meet and decide to assess their forces. After doing so (when the problem "begins"), they have a preference as to whether to attack or not. They can send each other messages (to be very specific, let's say that any message sent is sent at 8 AM on whatever day it is sent and that any message that is received arrives at 8 PM on the same day--these details are arbitrary but make no difference in the problem formulation. Even with these restrictions, the problem is insoluble). Finally, any message sent through the empire may be intercepted. There is no guarantee that any message will ever be received.

    The following are the formal conditions of the problem:

    1. Termination: Each king eventually must decide 0 (do not attack) or 1 (attack).

    2. Agreement: The decisions (d1 and d2) of both kings must be the same.

    3. Validity: Let i1 and i2 denote the preferences of k1 and k2, respectively. Then:
    - if i1 and i2 are 0, d1 and d2 must be 0
    - if i1 and i2 are 1, d1 and d2 must be 1

    This formulation is equivalent to the first formulation: though the information they need to agree on is different, the problem is the same. The following is a proof of its insolubility: Assume there exists a correct algorithm A. Let E(a,b) be the scenario where i1 = a, i2 = b, and all messages are lost. By validity (assuming A is correct), d1 = d2 = 0 in E(0,0) and d1 = d2 = 1 in E(1,1). E(1,1) is indistinguishable from E(1,0) for k1 (remember, no one receives any messages), so k1 decides 1 in E(1,0). Similarly, E(0,0) is indistinguishable from E(1,0) for k2, so k2 decides 0 in E(1,0). Of course this contradicts agreement, as d1 != d2 in E(1,0), so A cannot exist.

    Remember, for A (any algorithm) to be correct, it must work for all scenarios. Here is a scenario where it does not work. Therefore, there can be no algorithm.
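
    The indistinguishability argument can be encoded as a small self-check. This toy encoding is my own, not Cha0s's: `decide(preference, received)` stands for an arbitrary deterministic algorithm as seen by one king, and the empty tuple represents the all-messages-lost scenario.

```python
def check_impossibility(decide):
    """Validity in E(0,0) and E(1,1) forces disagreement in the mixed
    scenario E(1,0) when every message is lost."""
    nothing = ()  # every messenger captured: a king receives no messages
    # Validity pins down the decisions in the uniform all-lost runs:
    assert decide(0, nothing) == 0, "violates validity in E(0,0)"
    assert decide(1, nothing) == 1, "violates validity in E(1,1)"
    # In E(1,0), k1 sees exactly what it saw in E(1,1), and k2 exactly
    # what it saw in E(0,0), so their decisions are already determined:
    d1, d2 = decide(1, nothing), decide(0, nothing)
    return d1 == d2  # does agreement hold?

# Any rule satisfying validity must fail agreement in E(1,0):
print(check_impossibility(lambda preference, received: preference))  # False
```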

    Interestingly, a seemingly much weaker form of the problem is also insoluble: Replace condition 3 with: 3a. Weak Validity: If i1=i2=1 and no message is lost, then d1=d2=1.

    The proof of this is slightly more complicated, but shows that the problem is quite insoluble.

  12. Cha0s says:

    (forgot to check notify)

  13. Tim McCormack says:

    Cha0s: Ha, I think I see where my misunderstanding lies! I thought the problem was saying "an algorithm that sometimes allows consensus under these circumstances cannot exist", but what it is really saying (I think) is "there is no algorithm that will always guarantee consensus."


  14. scuroNok says:

    Silence has come :) Does anybody have something to say?

  15. boarders' paradise says:

    Yes, me. But I'm so busy at the moment; I hope I'll get 'round to it soon.

    In the meanwhile: my text in posting 7 has been completely messed up (I know I posted it right). Tim, can you fix that?

  16. Wireless MAC protocols « Wireless Sensor Networks, Spring 2011 says:

    [...] Here’s a link to the original paper on the limits of distributed computing which introduced a version of what has since become known as the Two Generals’ Problem. Some of the subtleties of this classic paradox are explored here. [...]

Commenting is not yet reimplemented after the Wordpress migration, sorry! For now, you can email me and I can manually add comments.