
February 05, 2009

Comments

            MISSION COMPLETE

              Ships      Ships   Planets    Planets
              Destroyed  Lost    Destroyed  Lost
Humans            0         1        0         1
Babyeaters        0         1        0         0
Superhappies      1         0        0         0

Over the last parts the pace is too fast; it feels rushed. This leads to a loss in quality of the fiction, imo. Besides, it glosses over some holes in the story, such as: why would Akon keep his word under these circumstances? Why would the Superhappies not foresee the detonation of Huygens? Why only three hours to evacuate...?

Cannibal, what exactly is your point and aren't you forgetting all the Babyeater casualties we'd expect in the next week?

The point is that the Normal Ending is the most probable one.

If blowing up Huygens could be effective, why did it even occur to you to blow up Earth before you thought of this?

Hmm. I think I'd rather have agreed to the Superhappies' deal.

One reason is that with their rate of expansion -- which they might be motivated to increase now, too -- they'll probably surprisingly soon find an alternative starline route to humans anyway. (Though even if this was guaranteed not to happen, I probably still would rather have agreed to the deal.)

Also, I think I would prefer blowing up the nova instead. The babyeaters' children's suffering is unfortunate, no doubt, but hey, I spend money on ice cream instead of saving starving children in Africa. The superhappies' degrading of their own, more important, civilization is another consideration.

(you may correctly protest about the ineffectiveness of aid - but would you really avoid ice cream to spend on aid, if it were effective and somehow they weren't saved already?)

Shutting up and multiplying suggests that we should neglect all effects except those on the exponentially more powerful species.

Cannibal: Heh.

Spuckblase: You know, you're right. I revised/returned some paragraphs that were deleted earlier, starting after "...called that thinking sane."

Simon: It just didn't happen to cross my mind; as soon as I actually generated the option to be evaluated, I realized its superiority.

Steven: I thought of that, but decided not to write up the resulting conversation about Babyeater populations versus Babyeater expansion rates versus humans etcetera, mostly because we then get into the issue of "What if we make a firm commitment to expand even faster?" The Superhappies can expand very quickly in principle, but it's not clear that they're doing so - human society could also choose a much higher exponential on its population growth, with automated nannies and creches.

Aleksei: Part of the background here (established in the opening paragraphs of chapter 1) is that the starline network is connected in some way totally unrelated to space as we know it - no star ever found is within reach of Earth's telescopes (or the telescopes of other colonies). Once the Huygens starline is destroyed, it's exceedingly unlikely that any human will find the Babyeaters or the Superhappies or any star within reach of any of their stars' telescopes, ever again. Of course, this says nothing of other alien species - if anyone ever again dared to follow a nova line.

Vassar pointed out that there's a problem if you have exponential expansion and any significant starting density; I'd thought of this, but decided to let it be their Fermi Paradox - maybe any sufficiently advanced civilization discovers that it can traverse something other than starlines to go Somewhere Else where intelligence is a set of measure zero, and actually traversing starlines is dangerous because of who might learn about your existence and e.g. threaten to blackmail you.

The Superhappies are not intimately acquainted with deception, as opposed to withholding information, and their ability to model other minds they can't have sex with seems to be weak - no matter what their thinking speed. This was shown in chapter 5, when the Superhappies - who don't lie, remember - concluded their conversation with "We hope you do not mind waiting."

The Superhappies don't seem to have the same attachment to their current personalities, or the same attachment to individual free will, as a human does; so far as they can understand it, they're just trading utilons with us and offering us a good deal on the transaction. Akon himself believed what he was telling them, which defeats many potential methods of lie detection.

In general, the Superhappies seem to lack numerous human complications such as status quo bias (keeping your current self intact), or preferences for particular rituals of decision (such as individual choice). The resulting gap between their decision processes and ours is not lightly crossed by their mostly sexual empathy, and it's not as if they can simulate us on the neural level from scratch.

Several commenters earlier asked whether the Superhappies "defected" by firing on the Babyeater ship. From the Superhappy standpoint, they had already offered the Babyeater ship the categorically cooperative option of utility function compromise; in refusing that bargain, the Babyeaters had already defected.

Three hours and forty-one minutes simply happens to be how long it takes to blow up a Huygens-sized star.

The assumption of the True Ending is that the Superhappies were (a) not sure if destroying the apparently cooperative Impossible would encourage Huygens to blow itself up if the Impossible failed to return; and (b) did not have any forces in range to secure Huygens in time, bearing in mind that the Babyeaters were a higher priority. The Normal Ending might have played out differently.

The Superhappies can expand very quickly in principle, but it's not clear that they're doing so

We (or "they" rather; I can't identify with your fanatically masochist humans) should have made that part of the deal, then. Also, exponential growth quickly swamps any reasonable probability penalty.

I'm probably missing something but like others I don't get why the SHs implemented part of BE morality if negotiations failed.

The point is that the Normal Ending is the most probable one.

Historically, humans have not typically surrendered to genocidal conquerors without an attempt to fight back, even when resistance is hopeless, let alone when (as here) there is hope. No, I think this is the true ending.

Nitpick: eight hours to evacuate a planet? I think not, no matter how many ships you can call. Of course the point is to illustrate a "shut up and multiply" dilemma; I'm inclined to think both horns of the dilemma are sharper if you change it to eight days.

But overall a good ending to a good story, and a rare case where a plot is wrapped up by the characters showing the spark of intelligence. Nicely done!

So what is next? 7/8 implies a next part, yet it also seems to be finished.

Steven: They're being nice. That's sort of the whole premise of the Superhappies - they're as nice as an alien species can possibly get and still be utterly alien and irrevocably opposed to human values. So nice, in fact, that some of my readers find themselves agreeing with their arguments. I do wonder how that's going to turn out in real life.

Russell: It's stated that most colony worlds are one step away from Earth (to minimize the total size of the human network). This means there's going to be a hell of a lot of ships passing through Earth space (fortunately, space tends to be pretty large).

If you can get anyone at all from Huygens to the starline in 3.6 hours, then from the starline to Huygens and back is at most 7.2 hours. We assume that transit through Earth space is so dense that there are already many ships in close proximity to the Huygens starline. These speeds imply some kind of drive that doesn't use inertial reaction mass, so it's also safe to assume that many ships can enter atmosphere.

If they could call in ships from Earth, they could blanket the planet. So, yes, eight hours to evacuate. Eight days would make it practically certain the Superhappies would show up.

Russell Wallace,

A good fable is one thing; the most probable outcome (especially here) is another story. The undiscussed advantages the Superhappies have accumulated so far, and are accumulating at this very moment, are crucial.

Isn't this a win-win? The babyeaters get saved too, by the superhappies, who were not cut off from the babyeater starline. The only losers are the superhappies, who can't "save" the humans.

Julian,

And possibly billions of Huygens humans. Don't forget those.

It seems like an easy solution would be to just inform the superhappies a little more about oral sex (how humans "eat" their young). They could make a few tweaks, and we'd lose the least (some guys might consider that an improvement).

Anyone else think the Superhappies sounded a whole lot like the Borg?

Something like this possibility occurred to me, but I don't think this actually is better.

At least, I think I'd have to be walked through the reasoning, since right now I _THINK_ I'd prefer Last Tears to Sacrificial Fire, conditioned on, well, the conditions I list in this comment holding.

ie, giving up pain/suffering in a _non_ wireheading way, and being altered to want to eat _non-and-never-have-been-conscious_ "pre humans" really doesn't seem all that bad to me, compared to the combined costs of defecting in single iteration PD (again, same ole metarationality arguments hold, especially when we imagine future different species we may encounter) + slaughter of everyone unable to escape from Huygens in time + (and I'm thinking this part is really important) missing out on all the wonderful complexities of interaction with the updated versions of the Babyeaters and the Superhappies.

Heck, giving up pain/suffering in a non-wireheading (or otherwise not-screw-us-up) way may actually be a right and proper thing. I'm not _entirely_ sure on this, though. As I've said elsewhere, the main reason I'd want to keep it is simply a vague notion of not wanting to give up, well, any capability I currently have + wanting to retain the ability to at least decode/comprehend my old memories.

I mean, I guess I get the idea of "our values are, well, our values, so according to them, maintaining those values is right", but... it sure seems to me that those very same values are telling me "Last Tears" has a higher preference ranking than this does.

Eliezer:
How can the superhappies consider their offer fair if they made it up and accept it, and the babyeaters reject it? Why do they think that their payment to the babyeaters is in any way adequate?

It seems to me that they would have to at least ramp up the payment to/costs for the babyeaters, until there was an offer the babyeaters would accept, even if the superhappies would reject it. Then there are points to negotiate from.

But just to make an offer that you predict the other side will reject, and then blow them up? The babyeaters were nicer.

I agree with the President of Huygens; the Babyeaters seem much nicer than the Lotuseaters. Maybe that's just because they don't physically have the ability to impose their values on us, though.

Strange this siding with Babyeaters here ... strange.

I prefer the ending where we ally ourselves with the babyeaters to destroy the superhappies. We realize that we have more in common with the babyeaters, since they have notions of honor and justified suffering and whatnot, and encourage the babyeaters to regard the superhappies as flawed. The babyeaters will gladly sacrifice themselves blowing up entire star systems controlled by the superhappies to wipe them out of existence due to their inherently flawed nature. Then we slap all of the human bleeding-hearts that worry about babyeater children, we come up with a nicer name for the babyeaters, and they (hopefully) learn to live with the fact that we're a valuable ally that prefers not to eat babies but could probably be persuaded given time.

P.S. anyone else find it ironic that this blog has measures in place to prevent robots from posting comments?

Personally, I side with the Hamburgereaters. It's just that the Babyeaters are at the very least sympathetic; I can see viewing them as people. As they've said, the Babyeaters even make art!

The remarkable thing about this story is the conflicting responses in the comments. The fact that a relatively homogeneous group of humans can have totally different intuitions about which ending is better and which aliens they prefer means, to me, that aliens (or an AI, whatever) have the potential to be, well, alien, far in excess of what is described in this story. Both aliens have value systems which, while different from ours, are almost entirely comprehensible. I think we might be vastly underestimating how radically alien aliens could be.

anyone else find it ironic that this blog has measures in place to prevent robots from posting comments?

Only for those stupid robots who can't read a few funny written letters. Babyeater-level robots can't talk here.

And now there will be many cults trying as hard as they can to make contact with the superhappies.

I say this because I witnessed many people discussing Brave New World as an actual utopia... Humans can have incompatible values too.

Eliezer, I hope you'll consider expanding this story into a novel. I'd buy it.

I wonder, do the people preferring the Babyeaters over the Superhappies, remember that as a necessary consequence of Babyeater values nearly half* of their species is, at any given time, dying in severe pain?

*From part 2, ~10 children eaten/year/adult, ~1 month for digestion to complete.
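(Worked out, on the rough assumption that the relevant population is adults plus children currently being digested: 10 children/year × 1/12 year ≈ 0.8 children in digestion per adult at any moment, which is 0.8/(1 + 0.8) ≈ 45% of that population - "nearly half".)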

Psy-Kosh: I don't see the final situation as a prisoner's dilemma - by destroying Huygens, humanity shows a preference for mutual "defection" over mutual "cooperation".

simon: err... descriptive, normative... ? Maybe you genuinely value ice cream over saving lives, but your behavior isn't a justificatory argument for this, or, given akrasia, even strong evidence.

Nick,

Behavior isn't an argument (except when it is), but it is evidence. And it's akrasia when you say, "Man, I really think spending this money on saving lives is the right thing to do, but I just can't stop buying ice cream" - not when you say "buying ice cream is the right thing to do". Even if you are correct in your disagreement with Simon about the value of ice cream, that would be a case of Simon being mistaken about the good, not a case of Simon suffering from akrasia. And I think it's pretty clear from context that Simon believes he values ice cream more.

And it sounds like that first statement is an attempt to invoke the naturalistic fallacy fallacy. Was that it?

It's evidence of my values which are evidence of typical human values. Also, I invite other people to really think if they are so different.

Eliezer tries to derive his morality from human values, rather than simply assuming that it is an objective morality, or asserting it as an arbitrary personal choice. It can therefore be undermined in principle by evidence of actual human values.

Also, I'm not at all confident that compromising with the Superhappies would be very bad, even before considering the probably larger benefit of them becoming more like us. I think I'd complain more about the abruptness and exogenousness of the change than the actual undesirability of the end state. As others have pointed out, though, a policy of compromise would lead to dilution of everyone's values into oblivion, and so may be highly undesirable.

More generally and importantly, though, I wonder if the use of wireheading as a standard example of "the hidden complexity of wishes" and of FAI philosophical failure (and it is an excellent example) leads me and/or other Singularitarians to have too negative a reaction to, well, anything that sounds like wireheading, including eliminating pain.

If the Super-Happies were going to turn us into orgasmium, I could see blowing up Huygens. Nor would it necessarily take such an extreme case to convince me to take that extreme measure. But this . . . ?

"Our own two species," the Lady 3rd said, "which desire this change of the Babyeaters, will compensate them by adopting Babyeater values, making our own civilization of greater utility in their sight: we will both change to spawn additional infants, and eat most of them at almost the last stage before they become sentient."

...

"It is nonetheless probable," continued the Lady 3rd, "that the Babyeaters will not accept this change as it stands; it will be necessary to impose these changes by force. As for you, humankind, we hope you will be more reasonable. But both your species, and the Babyeaters, must relinquish bodily pain, embarrassment, and romantic troubles. In exchange, we will change our own values in the direction of yours. We are willing to change to desire pleasure obtained in more complex ways, so long as the total amount of our pleasure does not significantly decrease. We will learn to create art you find pleasing. We will acquire a sense of humor, though we will not lie. From the perspective of humankind and the Babyeaters, our civilization will obtain much utility in your sight, which it did not previously possess. This is the compensation we offer you. We furthermore request that you accept from us the gift of untranslatable 2, which we believe will enhance, on its own terms, the value that you name 'love'. This will also enable our kinds to have sex using mechanical aids, which we greatly desire. At the end of this procedure, all three species will satisfice each other's values and possess great common ground, upon which we may create a civilization together."

Sure, I would turn this down if it were simply offered as a gift. But I really, really, cannot see preferring the death of fifteen billion people over it. Although I value the things that the Super-Happies would take away, and I even value valuing them, I don't value valuing them all that much. Or, if I do, it is very far from intuitively obvious to me. And the more I think about it, the less likely it seems.

I hope that Part 8 somehow makes this ending seem more like the "right" one. Maybe it will be made clear that the Super-Happies couldn't deliver on their offer without imposing significant hidden downsides. It wouldn't stretch plausibility too much if such downsides were hidden even from them. They are portrayed as not really getting how we work. As I said in this comment to Part 3, we might expect that they would screw us up in ways that they don't anticipate.

But unless some argument is made that their offer was much worse than it seemed at first, I can't help but conclude that the crew made a colossal mistake by destroying Huygens, to understate the matter.

Simon: "Eliezer tries to derive his morality from human values"

I would correct the above to "Eliezer tries to derive his morality from stated human values."

That's where many of his errors come from. Everyone is a selfish bastard. But Eliezer cannot bring himself to believe it, and a good fraction of the sorts of people whose opinions get taken seriously can't bring themselves to admit it.

Tyrrell: Agreed. As I said in what, well, I said, my acceptance of the SuperHappy bargain was conditional in part on, well, the change being engineered in such a way that it doesn't make the rest of our cognitive structure, values, etc go kablewey. But, given that the changes are as advertised, and there aren't hidden surprises of the "if I really thought through where this would lead, I'd see this is very very bad" variety, well, sure seems to me that the choice in this ending is the _wrong_ one.

Nick: And do we really want, in general, defection to be the norm? ie, when we next meet up with a different species? ie, by the same ole metarationality arguments (ie, blah blah, it's not us causing their behavior, but common cause leading to both, our choices arise from algorithms/causality, yada yada yada yada) it would seem that now humanity ought to expect there to be more PD defectors in the universe than previously thought. I think...

This would be a bad thing.

I'm not sure, but was this line:

But, from the first species, we learned a fact which this ship can use to shut down the Earth starline

supposed to read "the Huygens starline"?

Sure, I would turn this down if it were simply offered as a gift. But I really, really, cannot see preferring the death of fifteen billion people over it.

How many humans are there not on Huygens?

Psy-Kosh: Yeah, I meant to have an "as Psy-Kosh has pointed out" line in there somewhere, but it got deleted accidentally while editing.

ad:

How many humans are there not on Huygens?

I'm pretty sure that it wouldn't matter to me. I generally find on reflection that, with respect to my values, doing bad act A to two people is less than twice as bad as doing A to one person. Moreover, I suspect that, in many cases, the badness of doing A to n people converges to a finite value as n goes to infinity. Thus, it is possible that doing some other act B is worse than doing A to arbitrarily many people. At this time, I believe that this is the case when A = "allow the Super-Happies to re-shape a human" and B = "kill fifteen billion people".
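(As a toy illustration of that convergence, using a made-up functional form rather than anything from the story: if the badness of doing A to n people behaved like B_max(1 - 2^(-n)), it would never exceed the bound B_max no matter how large n gets, so a single act B worse than B_max would outweigh A done to arbitrarily many people.)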

Oh, I'm starting to see why the Superhappies are not so right after all, what they lack, why they are alien, in the Normal Ending and in Eliezer's comments. I think this should have been explained in more detail in the story, because I initially failed to see their offer as anything but good, let alone bad enough to kill yourself over. I want untranslatable 2! Still, if I had been able to decide on behalf of humanity, I would have tried to make a deal - not outright accepted their offer, but negotiated to keep more of what matters to us, maybe by adopting more of their emotions, or asking for lesser modifications of them. It just doesn't look *that* irreconcilable. Also, their offer to have the Babyeaters eat nonsentient children sounds stupid - like replacing our friends and lovers with catgirls.

Julian Morrison:
The only losers are the superhappies, who can't "save" the humans.

You are ignoring the human children.

As the superhappies pointed out, they are in a situation comparable to that of the babyeater children - suffering before having internalized a philosophy that makes it okay, only because the adults want them to. (Which was the whole reason why the superhappies wanted to intervene.)

I think that this is the "right" ending in the sense that I think it's the kind of thing that typical present-day non-singularitarian humans would do: Be so afraid of being altered that they would consign a large number of their own kind to death rather than face alteration (correct or incorrect, this is the same kind of thinking you see in resistance to life extension and various other H+ initiatives). I'm not confident that it's what rational humans should do.

Small changes in the story could make me get off the fence in either direction. If the death toll for avoiding the Superhappy-Babyeater-Human Weirdtopia was negligible and Huygens could completely evacuate, then I would support blowing it up. Alternatively, if the Superhappy proposal was stripped of Babyeater values, especially if a slightly better compromise between human and Superhappy values was possible, then I would not support blowing up Huygens.

I think the Superhappy proposal was bad, but as Tyrrell McAllister said, I'm not sure it was so bad as to justify killing 15 billion people. And most of the problem with the Superhappy proposal was actually due to the Babyeater values that the Superhappies wanted to introduce, not the Superhappy values they wanted to introduce. I really can't see Babyeaters and humans ever compromising, but I can see Superhappies and humans compromising.

I think if humans had run into the Superhappies alone, or had persuaded them not to force Babyeater values on us, then a mutually acceptable deal with the Superhappies could have been worked out (for instance, what if they left the warning component of pain, but made it less painful?). The Superhappies and humans should've gotten together, found a compromise or union of our values, then imposed those values on the Babyeaters (whose values are more repugnant to us than the Superhappies', and more repugnant to the Superhappies than ours).

To again agree with Tyrrell, if the story had been written such that the Superhappies wanted to do something more drastic and dehumanizing than eliminate "bodily pain, embarrassment, and romantic troubles," such as turn us into orgasmium, then I would see a much bigger problem with cooperating with them. But, they aren't, and what they are taking away would alter our humanity, but not destroy it. They aren't trying to remove complex positive experiences, only negative ones; they aren't trying to remove humor or art. They do want to have sex with humans, but this is merely weird, not catastrophic, and might even be more acceptable to humans in this story due to their, uh, different attitudes towards sex than ours.

Minus the Babyeater values, the Superhappy deal would merely lead to a Weirdtopia that doesn't sound all that bad as far as Weirdtopias go, unless there's something I'm missing (and I think many humans would think it was great). The Superhappy-Human Weirdtopia doesn't seem bad enough to justify killing 15 billion people. Maybe I just have different intuitions.

Eliezer, thanks. I mostly read OB for the bias posts and don't enjoy narratives or stories, but this one was excellent.

Tyrrell, we aren't told how many humans exist. There could be 15 trillion, so the death of one system may not even equal the number of people who would commit suicide if the SHs had their way.

I don't find the SHs to be "nice" in any sense of the word. In my reading, they aren't interested in making humans happy. They can't be - they don't even understand the human brain. I think they are a biological version of Eliezer's smiley face maximizers. They are offended by mankind's expression of pain (it's a negative externality to them) and want to remove what offends them. I don't think any interstellar civilization would be very successful if it did not learn to ignore or deal with non-physical negative externalities from other races (which would, unfortunately, include baby-eating).

The SHs did not even seem to consider the most obvious option (to me, at least), which is to trade and exchange cultures the normal way. Many humans would undoubtedly be drawn into the SH way of life. I suppose their advanced technology makes the cost of using force relatively low, so this option seemed unacceptable. Still, I wonder why Akon didn't propose it (or did he)?

I've enjoyed the story very much so far, Mr. Yudkowsky.

Incidentally, and fairly off-topic, there's a "hard" sci-fi roleplaying game that uses an idea similar to the starlines in this story. It can be found here:

http://phreeow.net/wiki/tiki-index.php?page=Diaspora

Come to think of it, I have no idea if there's //anyone// with an interest in roleplaying games in this forum... if there is, have fun!

Patrick (orthonormal), I'm fairly sure that "Earth" is correct. They haven't admitted that what they're going to do is blow up Huygens (though of course the President guesses), and the essential thing about what they're doing is that it stops the aliens getting to Earth (and therefore to the rest of humanity). And when talking to someone *in the Huygens system*, talk of "the Huygens starline" wouldn't make much sense; we know that there are at least two starlines with endpoints at Huygens.

Eliezer, did you really mean to have the "multiplication factor" go from 1.5 to 1.2 rather than to something bigger than 1.5?

Are bodily pain and embarrassment really that important? I'm rather fond of romantic troubles, but that seems like the sort of thing that could be negotiated with the superhappies by comparing it to their empathic pain. It also seems like the sort of thing that could just be routed around, by removing our capacity to fall out of love and our preference for monogamy and heterosexuality.

Grant:
I don't find the SHs to be "nice" in any sense of the word. ... They are offended by mankind's expression of pain (it's a negative externality to them) and want to remove what offends them.

I'm not entirely sure how "they are offended by helpless victims being forced to suffer against their will and want to remove that" translates into "the SHs aren't nice in any sense of the word".

Manon, thanks for pointing that out - I'd left that out of my analysis entirely. I too would like untranslatable 2. It doesn't change my answer though, as it turns out.

If the SHs find humans via another colony world, blowing up Earth is still an option.
I don't believe the SHs could have been bargained with. They showed no inclination towards compromise in any sense other than whichever one they had calculated as optimal based on their understanding of humans and babyeaters. Because the SHs don't seem to value the freedom to make sub-optimal choices (free will), they may also worry much less about making incorrect choices based on imperfect information (this is the only rational reason I can come up with for them wanting to make a snap decision when a flaw in their data could lead to more of what they don't want: suffering). It is probably the norm for SHs to make snap decisions based on all available data rather than take no action while waiting for more data.
They must have had a weird scientific revolution.

Kaj,

I'm not entirely sure how "they are offended by helpless victims being forced to suffer against their will and want to remove that" translates into "the SHs aren't nice in any sense of the word".
They aren't offended by suffering, but by the expression of it. They don't even understand human brains, and can't exchange experiences with them via sex, so how could they? Maybe the SHs are able to survive and thrive without processing certain stimuli as being undesirable, but they never made an argument that humans could.

Psy-Kosh: I understand the metarationality arguments; my point is that we didn't defect in a prisoner's dilemma. PD requires C/C to be preferable to D/D; but if destroying Huygens is defecting for humans, that can only be the case (under the story's values) if cooperating for Superhappies involves modifying themselves and/or giving us their tech without us being modified. I don't think that was ever on the table. (BTW, I liked your explanation of why the deal isn't so bad.)

Simon: Eliezer tries to derive his morality from human values... Common mistake; see No License to Be Human.

Thom: What do you mean by "naturalistic fallacy fallacy"? Google reveals several usages, none of which seem to fit. Also, regardless of Simon's actual values, it seemed to me he treated the statements "I buy ice cream instead of helping starving children" and "I value ice cream over helping starving children" as identical; this is a fallacy that I happen to find particularly annoying.
