Due to a lack of time with Easter and all, I’m not very confident of my answer to this week’s Riddler, but I’ll let you decide. Here’s the riddle:

Suppose a casino invents a new game that you must pay $250 to play. The game works like this: The casino draws random numbers between 0 and 1, from a uniform distribution. It adds them together until their sum is greater than 1, at which time it stops drawing new numbers. You get a payout of $100 each time a new number is drawn.

For example, suppose the casino draws 0.4 and then 0.7. Since the sum is greater than 1, it will stop after these two draws, and you receive $200. If instead it draws 0.2, 0.3, 0.3, and then 0.6, it will stop after the fourth draw and you will receive $400. Given the $250 entrance fee, should you play the game?

Specifically,

what is the expected value of your winnings?

Okay, when I read that part it seemed like this was going to be a no-brainer, but then Ollie added:

[Editor’s note: You know how sometimes your boss/teacher asks you not to bring your laptop to a meeting/class, and you’re all like, “Jeez, how am I gonna tweet during the meeting/class, then?” Well, I’m going to pull rank and ask that you don’t bring laptops to this Riddler. Next week I will highlight — nay, celebrate! — the elegant, pencil-and-paper solutions over the brutal, cold, silicon ones. We’re all on the honor system with one another, right? RIGHT?]

No computers! Pencil and paper? Egads. Okay, so I did a little bit of working this out and I came up with an answer I thought might be right. First of all, I assumed that you are guaranteed to see at least 2 numbers, because with all the real numbers between 0 and 1, the chances of getting 1 exactly are one over infinity, or zero. So the most you can possibly lose in this game is $50. And the chances of seeing a third number are 50%, because the first two numbers sum to less than 1 exactly half the time: the region of the unit square where the first two draws sum to less than 1 is a triangle covering half its area.

Okay, so right away we see that there’s a 50% chance we lose $50 (-$250 + $200), and a 50% chance that we win $50 (-$250 + $300), with the possibility of more money to come. This is a wonderful casino game; I would very much like to find the casino that offers it. A game like this might explain how somebody could bankrupt a casino. (Sad!)

So we have a 50% chance of seeing only 2 numbers and losing $50. What are the chances of seeing only 3 numbers? Well, I figure if 2 random numbers sum to less than 1, then there’s a 2/3 chance that the 3rd number will break the ceiling. So 2/3 of what is remaining (1/2) is 1/3. So 1/2 of the time we see 2 and only 2 numbers, 1/3 of the time we see 3 and only 3 numbers, and now we have 1/6 (1/2 – 1/3) remaining.

- Chance of only 2 numbers: **1/2**. 1 – 1/2 = 1/2 remaining.
- Chance of only 3 numbers: 1/2 * 2/3 = **1/3**. 1/2 – 1/3 = 1/6 remaining.
- Chance of only 4 numbers: 1/6 * 3/4 = **1/8**. 1/6 – 1/8 = 1/24 remaining.
- Chance of only 5 numbers: 1/24 * 4/5 = **1/30**. 1/24 – 1/30 = 1/120 remaining.
- Chance of only 6 numbers: 1/120 * 5/6 = **1/144**.

And so on.

So to get the Expected Value (EV) we can take the probability of each outcome times its net payout and get:

-$50/2 + $50/3 + $150/8 + $250/30 + $350/144 ≈ $21.18

Plus some increasingly smaller numbers for seeing 7, 8, 9, 10? numbers.

Ok, now that we got our EV answer, let’s cheat! (If you don’t want to see whether a computer confirms or disproves my EV answer, stop reading here.)

I wrote some code in Go (are you sick of Go yet? Because I am not).

```
package main

import (
	"fmt"
	"math/rand"
)

// doFloatTrial draws uniform numbers until the running total passes 1
// and returns how many draws it took.
func doFloatTrial() int {
	total := 0.0
	numbers := 0
	for {
		numbers++
		total += rand.Float64()
		if total >= 1.0 {
			break
		}
	}
	return numbers
}

func main() {
	trials := 1000000
	var results []int
	cash := 0.0
	for i := 0; i < trials; i++ {
		cash -= 250.0 // pay the entrance fee
		numbers := doFloatTrial()
		cash += 100.0 * float64(numbers) // $100 per number drawn
		for a := len(results); a <= numbers; a++ {
			results = append(results, 0)
		}
		results[numbers]++
	}
	fmt.Printf("EV = %.3f\n", cash/float64(trials))
	for i := 0; i < len(results); i++ {
		fmt.Printf("Results[%d] = %d (%.2f)\n", i, results[i], float64(trials)/float64(results[i]))
	}
}
```

And here’s the output:

```
EV = 21.826
Results[0] = 0 (+Inf)
Results[1] = 0 (+Inf)
Results[2] = 499604 (2.00)
Results[3] = 334041 (2.99)
Results[4] = 124717 (8.02)
Results[5] = 33341 (29.99)
Results[6] = 6939 (144.11)
Results[7] = 1157 (864.30)
Results[8] = 184 (5434.78)
Results[9] = 15 (66666.67)
Results[10] = 2 (500000.00)
```

Hurray! Our EV from the code is $21.83, and the odds of drawing each count of numbers match (within sampling noise) what we had calculated by hand. Math! Follow this link to run the code yourself in your browser.

Thanks for reading. I’m not 100% sure of this one so I look forward to better approaches to this problem in the comments.


This week’s Riddler concerns talking to strange men in trench coats.

A man in a trench coat approaches you and pulls an envelope from his pocket. He tells you that it contains a sum of money in bills, anywhere from $1 up to $1,000. He says that if you can guess the exact amount, you can keep the money. After each of your guesses he will tell you if your guess is too high, or too low. But! You only get nine tries.

What should your first guess be to maximize your expected winnings?

Obviously if a man in a trench coat offers you money the answer is to run. But let’s answer the question anyway. My immediate first guess for this was $744, which was wrong. But only because I’m pretty sure the answer is $745.

How did I get this number? Well, the guessing game where someone tells you whether your guess is higher or lower is a classic example of a binary search, which is one of the most important algorithms in computer science. With 9 guesses, we know that we can guarantee coverage of 511 (2^9 – 1) values. That leaves 489 values for which we are going to not be able to guess the right answer. Sad!

Now the question is, which values do we want to cover? Well, without any knowledge to the contrary we should assume that all values between $1 and $1000 are equally likely, assuming our trench-coated friend is trustworthy (and aren’t they all?). So since there is just as much chance that there is $1000 in the envelope as $1, we might as well make sure we cover the top 511 values.

Put another way, imagine the man in the trench coat offers you only one guess. What should you do? (Run.) Well, you have just as much chance of being right by guessing $1000 as $1, so guess $1000 and hope for the best. Your expected value (EV) is the odds of being right times the payout, so if you guess $1000, your EV is $1. If you guess $1, your EV is $.001, which is… less.

So covering the top 511 values would be everything from $490 to $1000. The middle value between those two numbers is $745. I wrote some code to show this, and then ran 1000 simulations for each amount of money that is in the envelope. Here it is in Go:

```
// Jon Wiesman, somedisagree.com
package main

import "fmt"

func binSearchRecursive(min, max, value, guesses int) bool {
	if guesses == 0 || min > max {
		return false
	}
	guess := (min + max) / 2
	if guess == value {
		return true
	}
	if guess < value {
		return binSearchRecursive(guess+1, max, value, guesses-1)
	}
	return binSearchRecursive(min, guess-1, value, guesses-1)
}

func binSearchIterative(min, max, value, guesses int) bool {
	for ; guesses > 0; guesses-- {
		if min > max {
			return false
		}
		guess := (min + max) / 2
		if guess == value {
			return true
		}
		if guess < value {
			min = guess + 1
		} else {
			max = guess - 1
		}
	}
	return false
}

func DoTrialRecursively(min, max int) {
	trials := 1000
	found := 0
	totalEarned := 0
	for i := 1; i <= trials; i++ {
		if binSearchRecursive(min, max, i, 9) {
			found++
			totalEarned += i
		}
	}
	fmt.Printf("Recursively, with first value = %d, %d/%d success, total earned = %d\n",
		(min+max)/2, found, trials, totalEarned)
}

func DoTrialIteratively(min, max int) {
	trials := 1000
	found := 0
	totalEarned := 0
	for i := 1; i <= trials; i++ {
		if binSearchIterative(min, max, i, 9) {
			found++
			totalEarned += i
		}
	}
	fmt.Printf("Iteratively, with first value = %d, %d/%d success, total earned = %d\n",
		(min+max)/2, found, trials, totalEarned)
}

func main() {
	DoTrialIteratively(489, 1000)
	DoTrialIteratively(490, 1000)
	DoTrialRecursively(490, 1000)
	DoTrialRecursively(491, 1000)
}
```

And here’s a link to the Go playground for this code where you can play with the numbers and run it in your browser. Doing so gives this output:

```
Iteratively, with first value = 744, 511/1000 success, total earned = 380184
Iteratively, with first value = 745, 511/1000 success, total earned = 380695
Recursively, with first value = 745, 511/1000 success, total earned = 380695
Recursively, with first value = 745, 510/1000 success, total earned = 380205
```

So your EV if you start with a first guess of $745 is $380.695, and you’ve got a 51.1% chance of making some money. Pretty good deal. (You should still run.)

My binary search function differs from most binary searches in one small way. Here, I pass in the number of guesses and decrement it with each recursive call. When it reaches 0, I return false. I used recursion first because it’s easiest for me to write, but iteration might be easier for non-CS people to follow, so I wrote it iteratively as well. Both work.

If you call either function with min and max at $1 and $1000, respectively, giving a first guess of $500, you will still succeed 511 times but your expected value would only be $255.175. This is because the 489 missed values would be evenly distributed instead of all at the bottom. Try it.
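To make that comparison concrete, here is a small self-contained sketch (my own rewrite, not the code above) that scores a 9-guess binary search against every possible envelope amount, once over the full $1–$1000 range and once over the top 511 values:

```go
package main

import "fmt"

// countWins plays the 9-guess binary search over [min, max] against every
// envelope amount from $1 to $1000, returning how many amounts it finds
// and the sum of the amounts it wins.
func countWins(min, max int) (found, earned int) {
	for v := 1; v <= 1000; v++ {
		lo, hi := min, max
		for g := 0; g < 9 && lo <= hi; g++ {
			guess := (lo + hi) / 2
			if guess == v {
				found++
				earned += v
				break
			} else if guess < v {
				lo = guess + 1
			} else {
				hi = guess - 1
			}
		}
	}
	return
}

func main() {
	f1, e1 := countWins(1, 1000)   // first guess is $500
	f2, e2 := countWins(490, 1000) // first guess is $745
	fmt.Printf("start at $500: %d found, EV = $%.3f\n", f1, float64(e1)/1000)
	fmt.Printf("start at $745: %d found, EV = $%.3f\n", f2, float64(e2)/1000)
}
```

Both starting points find 511 of the 1000 values; only the totals differ ($255,175 versus $380,695 in winnings across the thousand cases, matching the EVs above), because the misses move from being spread across the whole range to being packed into the cheapest 489 amounts.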

So, that’s my answer: $745. I believe it is right unless there is some method to guarantee more than 511 found values with only 9 guesses. As always, I’m willing to admit I am wrong, so if I am, I’d love to know.

Thanks for reading.


As I said last week, a lot of people really hate whiteboard interviews, and they have some valid reasons for not liking them. (My thoughts are kind of like the Winston Churchill quote: “Democracy is the worst form of government except for all those others that have been tried from time to time.”) These reasons include, but are not limited to:

- Nobody writes code on a whiteboard so why should we evaluate people on a skill that they never do?
- Writing code on a whiteboard makes people nervous and more likely to make mistakes, so it’s a terrible way to evaluate people.
- Whiteboard interviews favor confident people with good presentation skills, not necessarily good coders.
- There are all sorts of posts, and even entire books, written about how to do well on whiteboard interview questions.
- Whiteboard interviews are prone to bias, rewarding candidates who resemble the interviewer instead of candidates who would perform well in the job.

Okay, so I don’t necessarily disagree with any of those points, but the thing is, a lot of those statements are also true for just about *any* type of interview. If you are interacting with the candidate in any way, candidates who are confident and poised will tend to do better than other candidates. That has nothing to do with a whiteboard. Interviewers are prone to bias regardless of whether they are reading a resume, or reviewing an application, or having a one-on-one conversation, or conducting a whiteboard interview. To be sure, interviewers need to guard against all those things, but I don’t think they are more or less associated with whiteboard exercises.

I should also say that I myself have had some really embarrassing experiences with whiteboard interviews as a candidate. I have completely bombed some interviews. It can happen to anyone, or at least anyone only as skilled as myself. This isn’t a case of me thinking, well, I am good at this thing so I’m looking for other people who are good at this thing. But I think that experience has given me some insight as to what to look for and a little bit of empathy.

The strongest argument against whiteboards would be that nobody writes code on a whiteboard, and this is absolutely true. Some companies prefer to give candidates a problem to be worked on at home; some have candidates do a joint coding exercise with potential teammates. Blizzard Entertainment has candidates do a joint coding project with *other candidates*, which I feel is just crazy, but you can’t argue with their results, I guess. My concern with take-home exercises or joint coding exercises is that you’re kind of putting all your eggs in one basket. When I give a whiteboard exercise, I’m not the only one doing so. We have several people giving them. It’s possible that someone could bomb my portion and do well enough in another session to get an offer. With a coding project, you’re probably getting one (larger, yes, but still solitary) piece of data to make a judgement on.

At Carbine we did sort of a pseudo-whiteboard exercise. Instead of a whiteboard, we had candidates type into a plain text editor (Notepad++) that was projected onto a screen that everyone could see. That solved the problem of bad penmanship, but I think it had other drawbacks. For one, I can’t do my awesomely terrible curly brace squiggles in a text editor, and that’s actually something I want to do to try to make the candidate more comfortable. More importantly, the presence of an actual text editor seems to make candidates pay even more attention to things I don’t care about, like lining up their brackets or getting every semicolon right. Also, you can’t draw in a text editor.

The great thing about whiteboards is that if you want to draw a Venn diagram, you just… draw a Venn diagram. If you want to outline a tree structure, draw a tree structure. Finally, and this might just be my perception, people who are typing don’t seem as talkative as people who are using a whiteboard. Something about typing seems to focus more attention on that task, whereas drawing on a whiteboard seems to allow people to still talk. And I am mostly interested in what they are saying as they draw diagrams and write sloppy code on the whiteboard. So, yeah, I use the whiteboard.

One other reason for continuing to use whiteboards will sound really terrible, an appeal to authority or perhaps a bit of cargo cult logic: Google and Amazon still use them. Now, doing something just because someone else does it is not a great reason, but I know a couple things about Google: one, they are obsessive about measuring things. And two, they used to ask “lateral thinking” type questions and they *stopped* because their metrics showed that there was no correlation between those types of questions and whether someone was a good employee or not.

(An example of a lateral thinking question: “A man rides an elevator down from his 10th floor apartment every day. When he comes home from work, he rides up to the 5th floor and walks the remaining five stories to his apartment, unless it has been raining or there are other people in the elevator, in which case he rides all the way. Explain why.” Don’t ask that question; it won’t help you hire good engineers.)

So Google’s continued use of whiteboards suggest to me that they still have some value, but the real reason I use them is because my own experience has led me to trust them. I find value in them, because I’ve had good results. If they don’t work for you, then don’t use them on my account. The important thing is that you have a set of questions you want to answer about the candidate and a method of getting those answers. Whether that is whiteboard questions, take home projects, joint coding exercises, or whatever works for you, as long as you are comfortable with that process and you are getting good results, that’s what you should use.

So those are my basic thoughts about whiteboards and why I still use them, despite the legitimate concerns that people have about them. As I said in Part I, if you design the very best possible interview process, you will still be left with incomplete information. Whiteboard questions might be the worst way to evaluate engineers, except for all the others.

Okay, so that’s about a thousand words and I still haven’t given an example of an actual whiteboard question yet. I have several questions that I use in interviews, but I’m not going to tell you what they are because… I still use them. Also, at least three of them are whiteboard questions that I myself have been given by other companies at one time, and it would be rude of me to post answers here. So I’m going to share one that I made up, and I think is pretty good, but I don’t ever plan on actually asking a candidate.

Imagine you are given some number of integer arrays, each with some number of integers in them. I need a function that will return all the integers, in order, that appear in one, and only one, of these arrays.

I will then draw a Venn diagram that looks something like this:

And I will draw (after apologizing for my horrible curly braces!) an example of the data that might be passed to this function:

{2, 4, 6, 10, 30}

{3, 6, 9, 15, 30}

{5, 10, 15, 25, 30}

And my expected result: {2, 3, 4, 5, 9, 25}

Then I hand them the marker and say, “go.” Usually what happens next is that they stand at the whiteboard and stare at it. After about 2 seconds I’ll say, “Oh, and by the way, if you stare silently at the whiteboard for 10 minutes and then write out a perfect solution, that would be really impressive but I would much rather you think out loud so I can get a feel for how you’re thinking about this problem.”

So, if you are thinking to yourself, “wait, wait, that’s not enough information” then you are exactly right. I have been deliberately vague in defining this problem because I want to get an answer to one of my questions about this candidate: “Did she ask clarifying questions?” There are several questions that could be asked about this problem as I have defined it. First, are the input arrays always sorted? (Yes.) Are there ever duplicate values in any of the input arrays? (Nope.) Are the sizes of the input arrays always the same? (No.) Can I modify the input arrays? (Hmm, let’s say no.) What language should I use? (Your choice.) How about Haskell? (Um… no.) So hopefully the candidate will ask at least one of these questions or possibly some other questions.

(Note that while I will often be deliberately vague in defining a whiteboard question, I won’t mislead the candidate. I don’t do trick questions. I’m not trying to stump anyone.)

If the candidate asks one of those questions, I will respond with “good question!” and give them the answer. I really want to reward them for asking a good question and hope they will continue to do so. *You’re doing great, keep it up*. Also, I get to make a little plus in my notebook.

So if they don’t ask any of these questions and just start writing code I’ll observe for a bit to see if they are operating under the correct assumptions. If they are, then I’ll let them go, but if I think they might not be, I’ll interrupt and say, “oh, I should have said that the input arrays are sorted,” or whatever. I phrase that deliberately to make it my fault that I didn’t give them all the information. *Hey, we’re all human here; relax, you’re doing fine*.

So if they don’t ask a clarifying question is that a deal breaker? No, not even close. It’s definitely a plus if they do, but it’s just one piece of information, especially if the assumptions they have made all turn out to be correct. (The way I’ve written out the problem implies the answer to those questions, but it’s better to confirm them.)

As soon as they start writing code I get an opportunity to answer another one of my questions, “Did he ‘define the contract’ (API)”? Basically I’m looking for the function signature for the code about to be written. This is really important because it defines what the customer (in this case, me) wants from the program and what the engineer is promising to deliver. Nailing this down is really important. I’ve had candidates just start writing code with some variable definitions and a for loop. When this happens I will stop them and say, “let’s write the function declaration first.”

Depending on the problem this can lead to some interesting questions and discussions. Should the output be a return value or a pointer/reference parameter? Should any of the parameters be constant, and why or why not? You can sometimes get a feel for a candidate’s experience level from how this API is defined. Again, be cautious; it’s one data point, not a deal breaker if they screw this up. They could be trying something clever to impress you or they could just have a brain freeze; it happens.

For this particular problem, let’s assume that they come up with a function declaration that looks like this:

```
void UMI(const vector<int> *arrays, int arrayCount, vector<int> &out)
```

This is C++ of course, using the standard template library. UMI stands for Union Minus Intersection, which is kind of clumsy but okay. We’re going to accept a pointer to an array of constant int arrays, the number of arrays, and a reference to an out array parameter. Is this the best possible function declaration? Probably not, but I’m using it because it is convenient for showing different approaches to solving the problem.

Okay, so let’s assume that your candidate starts drawing and talking and after 10 minutes or so comes up with something like this:

```
void UMI(const vector<int> *arrays, int arrayCount, vector<int> &out)
{
    out.clear();
    for(int i = 0; i < arrayCount; i++)
    {
        const vector<int> &a = arrays[i];
        for(auto ai = a.begin(); ai != a.end(); ai++)
        {
            bool duplicate = false;
            for(int k = 0; k < arrayCount; k++)
            {
                if(k == i)
                    continue;
                const vector<int> &a2 = arrays[k];
                for(auto ki = a2.begin(); ki != a2.end(); ki++)
                {
                    if(*ai == *ki)
                    {
                        duplicate = true;
                    }
                }
            }
            if(!duplicate)
            {
                out.push_back(*ai);
            }
        }
    }
    std::sort(out.begin(), out.end());
}
```

Okay, so good news is that we’ve answered one question, “Did she get a valid solution?” affirmatively here. This code will run and it will get the right answer. That’s not insignificant. Sadly, many candidates you interview will not get a valid solution to the problem you are presenting, so something like this isn’t the worst thing you’ll see. That said, it’s not great.

This is an **n²** solution, where n is the total number of items in all the arrays. Now, for the small data set that I’ve given in my example, this will run in 0.0 milliseconds. No problem. But as n grows… yeah, things get messy.

So now we can start to answer some of our other questions, “could he have a discussion about complexity?” Ask the candidate what the runtime complexity in big-O notation of this solution is. Can he tell you it is **n²**? Does he understand that is not desirable? I’ve had candidates tell me they don’t know all the big-O notations for common algorithms by heart, and I’m like, that’s fine, that’s easily found. But I’ve also had candidates tell me that they don’t really care about big-O notation, and that’s… not as fine. I mean, it’s a weird thing to say to an interviewer because obviously I asked about it so I probably care.

So I often get **n²** or similar type answers to whiteboard problems, and I will ask, “how can we improve this answer?” We’ll talk about it some more and I will often give them enough hints to get to a better answer. Sometimes they will offer improvements on the solution they’ve already provided. For example, in the solution above, the candidate might suggest breaking out of the innermost for loop when the inner value being tested exceeds the outer value. That’s a pretty good improvement, but it’s only an incremental change. It doesn’t fundamentally change the runtime complexity of the solution; it will still be **n²**.

So would this solution be a deal breaker? Again, no. A lot of candidates will go right to the naive solution because they want to get a working, valid solution on the board. I’ve had really good engineers say they *always* start with the naive solution and use it as a test case to confirm their trickier, more optimal solutions. The important thing is that they should be able to understand *why* the complexity matters.

Okay, now let’s look at another possible solution to this problem, this time using an ordered map.

```
void UMI(const vector<int> *arrays, int arrayCount, vector<int> &out)
{
    map<int, int> counts;
    out.clear();
    for(int a = 0; a < arrayCount; a++)
    {
        for(auto i = arrays[a].begin(); i != arrays[a].end(); i++)
        {
            auto mitr = counts.find(*i);
            if(mitr == counts.end())
                counts[*i] = 1;
            else
                mitr->second++;
        }
    }
    for(auto mitr = counts.begin(); mitr != counts.end(); mitr++)
    {
        if(mitr->second == 1)
            out.push_back(mitr->first);
    }
}
```

Okay, so this code is much better. It dumps all the values from all the arrays into a map and keeps track of all their counts. Afterwards, it adds any value with a count of one to the out array. The runtime complexity of this solution is **n log n**, which is a major improvement.

Again, if someone comes up with this solution, we can answer all those questions that we answered before. Does she understand why the complexity is **n log n**? Can she have a discussion about it?

So if I got this solution I’d still ask the candidate, can we make this solution better? This is going to be harder to do; it’s not as low-hanging fruit as the naive solution. The key to getting a better solution here is to point out that the arrays are already sorted, so putting them all into a map is duplicating effort. We don’t need to do that. Instead we can look at the “frontier” (the first element) of all our arrays and look for the lowest value. If that lowest value is unique, then we can add it to the out array and increment the iterator on that array. If it is not unique, we increment all the iterators for that value and look at the next lowest value. We do this until we have looked at all elements in all arrays.

This is the kind of problem where I think the whiteboard is a help. The way to implement what I described is to have a heap (or priority queue) of all the arrays. With a whiteboard a candidate can just say, “I’m going to make a heap of all these arrays, sorted such that the lowest value pops to the top” and I’ll say, “yep, sure.” I don’t need them to write that heap. If someone describes a weird data structure that I’m not familiar with, I might ask them to sketch out how it works, but I’m good with just hand-waving a heap. I don’t care if they don’t get the exact syntax right for defining their templates or the exact function calls (is it pop() or pop_front() or…?) for using them.

Here’s the code to implement that algorithm.

```
class BookmarkedArray
{
    const vector<int> *m_array;
    vector<int>::const_iterator m_itr;
public:
    BookmarkedArray(const vector<int> *ar) : m_array(ar)
    {
        m_itr = m_array->begin();
    }
    bool operator < (const BookmarkedArray &other) const
    {
        return (*m_itr < *other.m_itr);
    }
    bool operator > (const BookmarkedArray &other) const
    {
        return (*m_itr > *other.m_itr);
    }
    const BookmarkedArray &operator =(const BookmarkedArray &other)
    {
        m_array = other.m_array;
        m_itr = other.m_itr;
        return *this;
    }
    bool AtEnd() const { return m_itr == m_array->end(); }
    int Value() const { return *m_itr; }
    void Next() { m_itr++; }
};

void UMI(const vector<int> *arrays, int arrayCount, vector<int> &out)
{
    priority_queue<BookmarkedArray, vector<BookmarkedArray>, std::greater<BookmarkedArray>> q;
    for(int i = 0; i < arrayCount; i++)
    {
        q.push(BookmarkedArray(arrays + i));
    }
    out.clear();
    int currentValue = 0;
    int currentCount = 0;
    while(!q.empty())
    {
        if(currentCount == 0)
        {
            BookmarkedArray ba = q.top();
            q.pop();
            currentValue = ba.Value();
            currentCount = 1;
            ba.Next();
            if(!ba.AtEnd())
                q.push(ba);
        }
        else
        {
            BookmarkedArray ba = q.top();
            q.pop();
            int value = ba.Value();
            ba.Next();
            if(!ba.AtEnd())
                q.push(ba);
            if(value == currentValue)
            {
                currentCount++;
            }
            else
            {
                if(currentCount == 1)
                {
                    out.push_back(currentValue);
                }
                currentCount = 1;
                currentValue = value;
            }
        }
    }
    if(currentCount == 1)
    {
        out.push_back(currentValue);
    }
}
```

On a whiteboard, I don’t care about the BookmarkedArray class. Saying you have a heap of arrays is fine. But in a full IDE, I have to define it, and it’s wasting time. I want to talk about this algorithm, not waste time getting the syntax exactly right.

So the time-complexity of this function is **n log q**, where **q** is the count of arrays instead of the number of items in the data set, or **n**. That can be a pretty significant savings. So running all these different approaches to the same problem on a data set of 421,021 items in 15 arrays leads to some eye-opening results. (Source code here.) All three of these approaches produce the exact same output, but the time is significantly different:

```
UMI naive (15 arrays) = {19999 multiples of 5}
Array sizes = {28033, 28081, 28150, 28065, 28074, 28094, 28037, 28111, 28035, 28071, 28103, 28038, 27978, 28071, 28080} = 421021 total elements.
Elapsed ticks = 110367

UMI using map (15 arrays) = {19999 multiples of 5}
Array sizes = {28033, 28081, 28150, 28065, 28074, 28094, 28037, 28111, 28035, 28071, 28103, 28038, 27978, 28071, 28080} = 421021 total elements.
Elapsed ticks = 319

UMI using heap (15 arrays) = {19999 multiples of 5}
Array sizes = {28033, 28081, 28150, 28065, 28074, 28094, 28037, 28111, 28035, 28071, 28103, 28038, 27978, 28071, 28080} = 421021 total elements.
Elapsed ticks = 10
```

Complexity matters! Now I’ve heard people in the game industry say that our datasets are usually small enough that understanding complexity isn’t crucial, but I completely disagree. I have seen, and the industry is full of similar stories, cases where a designer comes up with a feature request and an engineer codes it up and it works great. It sails through QA with no problems. Then it gets released out into the wild and the whole game blows up because the largest number of concurrent players it was tested with was like 2. And sitting at the heart of it is an **n²**, or worse, algorithm that nobody noticed until n got big.

Okay, so I hope that has given you something to think about as you think of how to go about evaluating potential engineering candidates. As I’ve said, whiteboard exercises work for me, but they might not work for you, and that’s okay. The important thing is that you have a strategy for answering the questions about the candidate that you need answered to make the best decision you can.

Thanks for reading. I hope to post some thoughts about diversity soon.


Consider the following simplified model of free throws. Imagine the rim to be a circle (which we’ll call C) that has a radius of 1, and is centered at the origin (the point (0,0)). Let V be a random point in the plane, with coordinates X and Y, and where X and Y are independent normal random variables, with means equal to zero and each having equal variance — think of this as the point where your free throw winds up, in the rim’s plane. If V is in the circle, your shot goes in. Finally, suppose that the variance is chosen such that the probability that V is in C is exactly 75 percent (roughly the NBA free-throw average).

But suppose you switch it up, and go granny-style, which in this universe eliminates any possible left-right error in your free throws.

What’s the probability you make your shot now? (Put another way, calculate the probability that |Y| < 1.)

So I’ve been doing all these Riddler exercises in C++ but this week I thought I’d mix it up and give the Go language a shot. I’ve never written anything in Go but a friend of mine sent me this text a while back:

As you can see, I have an exciting social life. So anyway I’ve been wanting to try it out for a while and I figured this blog would be a good opportunity. I’ve been doing the Go tutorials that you can find here.

Another benefit of doing the code (when possible) in Go is that I can link to a version of my code that you can run yourself in the browser, and then make changes to see the impact of those changes. I’m not sure that anyone was copying my C++ code into a compiler, creating a project, building, then running it. But this way running the code is literally just a click away.

Okay, so I thought this week’s Riddler was pretty straightforward from a “getting an answer” point of view. (Which probably means I’m totally wrong.) If I simulate normally distributed 2D points with some variance such that 75% of them fall inside a circle of radius 1, then what would be the probability of falling inside the circle if the x-component were always 0?

Luckily, Go has a really nice NormFloat64 function that generates normally distributed random numbers with a standard deviation of 1. With that deviation, only about 39% of random points land inside the circle (checking distance with the good old Pythagorean Theorem). **If** I understand the problem correctly, all I need to do is find the standard deviation that results in 75% of points falling inside the circle. Then I can calculate the magnitude of points where the x-component is 0, and find how frequently they fall in the circle.

So the big task here was to find the right value for the deviation factor. I did this with trial and error and came up with 0.60054. I’m sure there’s a math way to find this value (or a better value) and I look forward to seeing it (Attn: Alex!). Once you start generating x- and y-components, find their magnitude using Sqrt(x*x + y*y) and see if it is below 1. If it is, then that’s a made shot. Also, check to see what would happen if the x-component is 0. You can do that as Sqrt(0*0 + y*y) or Sqrt(y*y) or Abs(y).

Here’s the code in Go:

```go
// 538 Riddler: Should You Shoot Free Throws Underhand?
// http://fivethirtyeight.com/features/should-you-shoot-free-throws-underhand/
// Jon Wiesman, somedisagree.com
package main

import (
	"fmt"
	"math"
	"math/rand"
)

func main() {
	overhand := 0
	granny := 0
	trials := 1000000
	r := rand.New(rand.NewSource(int64(538)))
	dev := 0.60054
	for i := 0; i < trials; i++ {
		x := r.NormFloat64() * dev
		y := r.NormFloat64() * dev
		d := math.Sqrt(x*x + y*y)
		g := math.Abs(y)
		if d < 1 {
			overhand++
		}
		if g < 1 {
			granny++
		}
	}
	fmt.Printf("Overhand = %f%%\n", float64(overhand)*100.0/float64(trials))
	fmt.Printf("Granny = %f%%\n", float64(granny)*100.0/float64(trials))
}
```

And here’s a link that will take you to a Go playground where you can run that code and see these results:

```
Overhand = 75.000800%
Granny = 90.439500%
```

Try it! You can also play around with different random seed values (I used 538, natch) and see what happens. I think the actual make percentage for Granny-style is more like 90.41%, depending on random seed and number of trials.

Thanks for reading!


This week marks my one-year anniversary of working at MobilityWare, and over this last year I have been asked to conduct quite a few interviews of software engineering candidates. I’ve probably conducted interviews at about three times the rate I did them at Carbine. I used to really dread being tasked with an interview, mostly because I suspected that I wasn’t very good at them. And I was right: I used to be terrible. But over the last couple of years, as I’ve done more, I’ve started to really enjoy it, and I hope that is because I have gotten better.

Sometimes I interview people alone but often I will be paired with another interviewer, and sometimes we will have three people interviewing a candidate. This has given me an opportunity to not only see how others conduct interviews but also get feedback from peers on my own interviewing. So over the years, and the past year especially, I’ve learned quite a bit. Our HR director and I are doing an in-house presentation on interviewing techniques, and so I’m using these blog posts as kind of a way to organize my thoughts about that.

If you do a web search on “how to conduct a technical interview” you will find all sorts of advice, and quite a bit of it will be contradictory. A lot of the writers are very adamant that their way of conducting interviews is the right way and everyone else’s is wrong. So right off the bat I just want to say that these thoughts about interviewing engineers are what I’ve found work for *me*. I don’t know if they’re the best ideas, and I’m sure there will be things you disagree with. If you are looking for how to conduct a technical interview, I suggest looking at as many of those search results as possible. If this post also helps you, great, but the mere fact that you have actively sought out ways to improve your interviewing skills probably means you will be better than 80% of the people out there conducting interviews.

I should say that if you just have time to read one and only one post about technical interviews, it should probably be this one by Joel Spolsky. I mean, I don’t agree with all of it and I think you should keep reading mine, but if you only have time for one, then yeah, follow that link. It’s cool.

Interviewing is really important because hiring is really important. Employees are what truly differentiate one company from any other company, and companies that want to do great things must find and hire great people, and it’s really hard to do it well. In my 8 years at Carbine we grew from about 25 people to over 225, and since I started at MobilityWare we’ve almost doubled in size and we are still growing. This has meant a lot of interviewing, and a lot of decisions on whether to hire or, more commonly, not hire candidates.

You should want to do interviews. This goes for everyone, but especially for engineers. I know they are a time sink and I know they interfere with whatever tasks you are currently working on, but you should want to be involved in the process of choosing who you will be working with and with whose work you will be interacting. It’s hard to find good engineers and you will often have to be involved in tough “no” decisions, but the consequences of a bad hire are far more costly than missing out on a good candidate. If you have ever worked with engineers who seemed completely unfit for their jobs and you’ve wondered how they got those jobs in the first place, then you should want to make sure it doesn’t happen again.

Even if you are unconvinced that this stuff is really important, I guarantee your HR director thinks it is, so at the very least, you should be happy to be asked to help out with the process. If you haven’t been asked, volunteer. Ask to be part of the process as an observer. Take some initiative and learn how to do this. It’s important. (Have I mentioned it is important?)

Becoming a good interviewer will make you better at doing your job, regardless of what kind of job it is, but especially for engineering. In preparing to conduct an interview you need to be proficient in whatever you are interviewing for. You will need to be prepared to answer questions about the problems you are posing. Despite all that preparation you will sometimes be surprised at a new approach to the problems you discuss in your interview. Remember that many of the engineers you interview will actually be really good at engineering, and so you will actually learn from them as you evaluate them.

So if you are still reading, then hopefully you agree that this stuff is important, but here’s the thing: it’s also really hard. If you conduct the perfect interview, or design the perfect interview process, you will still collect only a fraction of the information necessary to truly evaluate a candidate. You are going to have to make a decision based on incomplete information, so it’s really important that you maximize the time that you have to get as much information as you can to make that decision.

You’ve no doubt heard the expression, “there’s no such thing as a stupid question.” Well, that’s wrong. Or at least it’s wrong if you are interviewing someone. Just as lawyers are taught to never ask a question in a courtroom when you don’t already know the answer, interviewers should never ask a question if the answer won’t help determine whether to hire the candidate or not. You have a finite number of questions; don’t waste them. If I ask the candidate’s favorite baseball team and learn it is the Minnesota Twins, what can I do with that information? (If it’s the Dodgers, sure, that’s a no-hire, but the Twins? I got nothing.)

Prepare a list of questions for yourself about the candidate. These aren’t questions you will be asking the candidate, but questions you hope to answer during the interview. Every question/problem/scenario you pose to the candidate should be about helping you answer these questions. For one of my programming questions, these questions are things like:

- Did she ask clarifying questions?
- Did he “define the contract” (API)?
- Did she get a valid solution? Naive, better, optimal?
- What was the time/space complexity of her solution(s)?
- Could he have a discussion about complexity?

Once you have defined the questions you want answered, you need to find questions/problems that will help you answer those questions. I use whiteboard programming questions. Now, some of you just read that last sentence and are preparing your virtual torches and pitchforks. If you look at that list of search results that I linked above, you will find all sorts of very angry writers who have declared that whiteboard questions are dead, that they are worthless, and that they have no place in an interview process. I’ll go into more detail about why I like whiteboard questions, with examples, in a future post, but for now I will just say that I understand why people don’t like them, but I still find them useful.

Make sure to prepare more questions than you will ever need for the time you have been allotted. Do NOT find yourself in an interview with 30 minutes to kill and no questions to ask, or nothing to talk about. I like to have some tough problems ready for candidates who sail through my standard questions and some easier questions for people who just flame out on the standard questions.

In general, don’t try out a brand new interview question on a candidate for the first time. Ask a coworker first, so you can see how it is answered. Your cool new problem might have a really simple solution that you didn’t think of, or it might be too complex for a 45- to 60-minute interview. It might have only one solution, or all the solutions might be of equal time/space complexity, and you probably want to have bad/okay/better/best solutions. Again, I’ll talk about some examples in a later post.

Finally, get together with your fellow interviewers and make sure you have a plan for how you will be tackling the interview(s). Make sure everyone knows what questions they will be asking and when they will be asking them. Don’t find yourself in a situation where you are desperately trying to fill time because all of your questions were asked already. It’s unprofessional and you’re wasting precious time.

Now that you’re prepared it’s time to do the actual interview. First and foremost, please try to remember your first job interview. If you’re a normal person you were probably really nervous. Likewise, the person you are interviewing is probably going to be nervous as well. Try to do something to make them more comfortable.

Since I do whiteboard problems, one of the first things I do is write on the whiteboard some information about the first problem. This always involves me needing to draw a set of curly braces { }. I cannot draw curly braces. At all. If I’m really lucky, they end up looking like a capital E and a 3, but more often they end up looking like weird, mismatched squiggles.

After drawing these weird squiggles, I apologize to the candidate and explain that I can’t draw curly braces and ask them to imagine those weird squiggles are curly braces. I do this every time. My fellow interviewers are sick of me doing this. But I’m trying to accomplish a couple things. One, I just want to poke fun at myself to let the candidate know I’m not taking myself too seriously here. Two, I want to “give permission” to the candidate to not worry too much about their own writing. I don’t care at all about penmanship. Or how neatly they draw semi-colons or brackets or even if the brackets are lined up. I’m not always successful with this; I’ve had candidates draw and erase the same line of code several times because they think it’s sloppy and are really trying to make a good impression. I try to encourage them to move on, but sometimes you need to let them do whatever they need to do to cope with the situation. Show some empathy here; it’s tough.

I’ll talk more about the whiteboard problems in Part Two. Regardless of what kind of interview you are conducting there are a couple things you should keep in mind. First, remember that the purpose of this interview is for you to evaluate the technical qualifications of the candidate. You are not there to prove to the candidate how smart you are. I’ve seen this from interviewers who seem to feel the need to show that they belong in the room; trust me, the candidate already believes you are a smart person. You don’t need to prove it.

That leads me to the second thing to remember. You are representing your company. Be nice to the candidates! If you end up being impressed with them, you want them to accept a job offer; candidates are much less likely to want to work with you if you demonstrate a thin skin or insecurity in the interview. In fact, you want the candidate to walk away from the interview excited about the possibility of working at your company, even if you ultimately decide (or have already decided) that this person is a no.

Eventually, the interview ends and you have to make a decision. I fill out a score card with notes I’ve taken and I immediately give a thumbs up or down on whether I think we should hire the candidate. There is no maybe. On this particular point, I am in agreement with Joel Spolsky: if the candidate is a maybe, that’s a no. I make this decision before talking with any other interviewers because I don’t want to be influenced. Afterwards, if there is disagreement, and there often is, we will have a meeting where we discuss the candidate. I can be persuaded of course, but I record that initial score immediately.

Finally, and this is something I will talk about in another post at more length, it’s really important to do as much as possible to eliminate bias from your decision. All people have bias of course, but diversity is so important to healthy successful companies. Many companies rate prospective candidates on “culture fit” and that’s okay, but be careful to ensure that you are talking about the company culture, not your personal culture. You want to choose candidates that you can work with, not necessarily candidates that you want to hang out with. Don’t let your company turn into a bunch of carbon copies of yourself; that might sound great to you, but it’s a recipe for disaster.

Of course one way to protect against that is to make sure interviews are not conducted by the same people all the time. Once you have become a regular interviewer at your company, start reaching out to other engineers to get them involved in the interview process. Convince them that it is in their best interest to have a say in the process. It will make them better engineers and it will help your company hire better engineers.

Thanks for reading! If you found this helpful or at least hilarious in its wrongness, look for Part Two next week.


Two players go on a hot new game show called “Higher Number Wins.” The two go into separate booths, and each presses a button, and a random number between zero and one appears on a screen. (At this point, neither knows the other’s number, but they do know the numbers are chosen from a standard uniform distribution.) They can choose to keep that first number, or to press the button again to discard the first number and get a second random number, which they *must* keep. Then, they come out of their booths and see the final number for each player on the wall. The lavish grand prize — a case full of gold bullion — is awarded to the player who kept the higher number. Which number is the optimal cutoff for players to discard their first number and choose another? Put another way, within which range should they choose to keep the first number, and within which range should they reject it and try their luck with a second number?

So, right away a naive solution would be to suggest that contestants should reject any number below 0.5 and accept anything equal to or above 0.5. I wrote a quick Monte Carlo (natch) to test various thresholds for two contestants. Here’s that code:

```cpp
#define PRECISION "%.8f"

double GetScore(double thresh)
{
    double score = RandDouble();
    if(score >= thresh)
        return score;
    return RandDouble();
}

int GameShow(double aThresh, double bThresh)
{
    double a = GetScore(aThresh);
    double b = GetScore(bThresh);
    if(a > b)
        return 0;
    if(b > a)
        return 1;
    return -1;
}

double MCGameShow(int trials, double a, double b, bool verbose = false)
{
    int aWins = 0;
    int bWins = 0;
    for(int i = 0; i < trials; i++)
    {
        int res = GameShow(a, b);
        if(res == 0)
            aWins++;
        if(res == 1)
            bWins++;
    }
    double pct = (double)aWins / (double)(aWins + bWins);
    if(verbose)
    {
        printf("MC: With A " PRECISION " and B " PRECISION ", A wins " PRECISION "%%.\n", a, b, pct * 100.0);
    }
    return pct;
}
```

And here’s some output for various thresholds:

```
MC: With A 0.50000000 and B 0.75000000, A wins 51.55000000%.
MC: With A 0.50000000 and B 0.25000000, A wins 54.83000000%.
MC: With A 0.50000000 and B 0.60000000, A wins 49.77000000%.
MC: With A 0.50000000 and B 0.40000000, A wins 51.73000000%.
```

Okay, so we can see that 0.5 is not the best number because it loses more often than not against 0.6 for example. So it’s obvious that we’re going to need some better analysis to figure this one out.

This problem is an example of finding a Nash Equilibrium: a set of values for each contestant such that any deviation from those values by one contestant results in an advantage for the other contestant. So my strategy for finding this equilibrium is to start with some threshold, say 0.5, and make small adjustments to it. If the new value does better against the previous value, keep going in that direction. When I find a value that seems to represent a local maximum, begin again with smaller adjustments. Continue this process until the size of the adjustments falls below some cutoff.

Here’s some code to do that:

```cpp
double FindNashMC(bool verbose = false)
{
    const int trials = 100000;
    double a = 0.5;
    double inc = 0.1;
    while(true)
    {
        double pctInc = MCGameShow(trials, a + inc, a, verbose);
        if(pctInc > 0.5)
        {
            a = a + inc;
        }
        else
        {
            double pctDec = MCGameShow(trials, a - inc, a, verbose);
            if(pctDec > 0.5)
            {
                a = a - inc;
            }
            else
            {
                inc /= 10.0;
                if(inc < 0.000000001)
                    break;
            }
        }
    }
    if(verbose)
    {
        printf("Nash rejection rate = " PRECISION "\n", a);
    }
    return a;
}
```

And here’s some output from that:

```
MC: With A 0.60000000 and B 0.50000000, A wins 50.49901996%.
MC: With A 0.70000000 and B 0.60000000, A wins 49.63398536%.
MC: With A 0.50000000 and B 0.60000000, A wins 49.35948078%.
MC: With A 0.61000000 and B 0.60000000, A wins 50.09150641%.
MC: With A 0.62000000 and B 0.61000000, A wins 50.30101204%.
... [lots snipped]
MC: With A 0.64089890 and B 0.64089900, A wins 49.82999320%.
MC: With A 0.64089901 and B 0.64089900, A wins 49.96599932%.
MC: With A 0.64089899 and B 0.64089900, A wins 49.88349417%.
MC: With A 0.64089900 and B 0.64089900, A wins 49.96049881%.
MC: With A 0.64089900 and B 0.64089900, A wins 49.89749282%.
Nash rejection rate = 0.64089900
```

Well, that seems… reasonable. But is it correct? Running this code over and over shows some pretty high variance. To reduce that variance, it’s necessary to increase the number of trials to 1,000,000 or more, but then it’s slow. And still not necessarily correct.

Here are some other rejection thresholds that this same code spits out:

0.61001981

0.60980422

0.60098909

0.63110990

0.59025124

That’s a pretty wide variance there, and it makes sense. The algorithm depends on lots of decisions to be made based on Monte Carlo simulations, all of which will have a good deal of variance. If any of those decisions are wrong, the end result is doomed. Unless we are willing to run literally billions of trials, we are not going to be able to get a precise answer beyond a few decimal places.
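To put a rough number on that, the sampling error of a Monte Carlo win-rate estimate follows the usual binomial formula:

```latex
% Standard error of an estimated win probability p from N Monte Carlo trials:
\mathrm{SE} = \sqrt{\frac{p(1-p)}{N}} \le \frac{0.5}{\sqrt{N}}
% With N = 10^6: SE is about 0.0005 (0.05 percentage points),
% far too coarse to resolve differences in the sixth decimal place.
```

So even a million trials gives roughly ±0.05 percentage points of noise, while the hill climb is trying to resolve differences far smaller than that.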

So we really need a better way to do this. Is it possible to use, ugh, *math* to find the answer? Of course it is. Each play of this game show boils down to one of four scenarios: 1) both A’s and B’s first numbers are below their thresholds and both redraw, 2) A keeps its first number but B redraws, 3) A redraws and B keeps its first number, and 4) both A and B keep their first numbers.

The odds of each of these scenarios are: 1) a * b, 2) (1-a) * b, 3) a * (1-b), and 4) (1-a) * (1-b).

Okay, so now that we know the odds of each scenario, we must multiply those likelihoods by the likelihood that A will win in each one. This boils down to… you know what? Let me just show you the code:

```cpp
double AWinPct(double a, double b, bool verbose = false)
{
    if(a == b)
        return 0.5;
    double odds1 = a * b;             // (a < thresh, b < thresh)
    double odds2 = (1 - a) * b;       // a > thresh, b < thresh
    double odds3 = a * (1 - b);       // a < thresh, b > thresh
    double odds4 = (1 - a) * (1 - b); // a > thresh, b > thresh
    double aWins1 = 0.5;
    double aWins2 = a + (1 - a) * 0.5;
    double aWins3 = (1 - b) * 0.5;
    double aWins4 = AOverB(1 - b, 1 - a);
    double pct = odds1 * aWins1 + odds2 * aWins2 + odds3 * aWins3 + odds4 * aWins4;
    if(verbose)
    {
        printf("Math: With A " PRECISION " and B " PRECISION ", A wins " PRECISION "\n", a, b, 100.0 * pct);
    }
    return pct;
}
```

Okay, so the first scenario is easy. Both contestants reject their first number and accept whatever their second number is, so 50% that either will win.

In the second scenario we know that A’s score is at least a. So there’s an **a** chance that B’s second number falls below a, in which case A wins outright. If B’s second number exceeds a, then both numbers are uniformly spread between a and 1, so A wins half of those cases, represented by (1-a) * 0.5.

The third scenario is the opposite of scenario 2. We know that B’s number is higher than b, so A’s redrawn number needs to exceed it. There is a (1-b) chance that A’s number even exceeds b, and then there’s a 50% chance that it also exceeds B’s number, giving (1-b) * 0.5.

The fourth scenario is one in which both contestants were happy with their first numbers. What are the chances that A beats B here? To answer that we need another function called AOverB, which takes two range widths and calculates the odds of one value being higher than the other. But we are going to pass 1-b for A, and 1-a for B. Why? Okay, bear with me here. Let’s say that A’s threshold is 0.6 and B’s threshold is 0.5. Now we know that A’s number will be between 0.6 and 1.0 and B’s will be between 0.5 and 1.0, and that they will be evenly distributed. So B’s range has width 0.5 and A’s only 0.4, but we pass them reversed because A’s range starts higher, at 0.6.

I hope that makes sense. Here’s the code for AOverB:

```cpp
double AOverB(double a, double b, bool verbose = false)
{
    double dPct = 0.5;
    if(a > b)
    {
        double dOddsOver = (a - b) / a;
        dPct = dOddsOver + (1 - dOddsOver) * 0.5;
    }
    else if(a < b)
    {
        dPct = 1 - AOverB(b, a);
    }
    if(verbose)
    {
        printf("Math " PRECISION " over " PRECISION " = " PRECISION "\n", a, b, dPct);
    }
    return dPct;
}
```

Okay, so now we have all our scenario odds and all our chances of winning each scenario. The final answer is obtained by adding up the products of each scenario’s occurrence rate with its win rate.

Now, at long last, we can rewrite our Nash function to use AWinPct, which finds the *exact* winning percentage for any two rejection thresholds almost instantly:

```cpp
double FindNash(bool verbose = false)
{
    double a = 0.5;
    double inc = 0.1;
    while(true)
    {
        double pctInc = AWinPct(a + inc, a, verbose);
        if(pctInc > 0.5)
        {
            a = a + inc;
        }
        else
        {
            double pctDec = AWinPct(a - inc, a, verbose);
            if(pctDec > 0.5)
            {
                a = a - inc;
            }
            else
            {
                inc /= 10.0;
                if(inc < 0.000000001)
                    break;
            }
        }
    }
    if(verbose)
    {
        printf("Nash rejection rate = " PRECISION "\n", a);
    }
    return a;
}
```

And here’s the output:

```
Math: With A 0.60000000 and B 0.50000000, A wins 50.50000000
Math: With A 0.70000000 and B 0.60000000, A wins 49.40000000
Math: With A 0.50000000 and B 0.60000000, A wins 49.50000000
Math: With A 0.61000000 and B 0.60000000, A wins 50.01200000
Math: With A 0.62000000 and B 0.61000000, A wins 50.00090000
Math: With A 0.63000000 and B 0.62000000, A wins 49.98970000
Math: With A 0.61000000 and B 0.62000000, A wins 49.99910000
Math: With A 0.62100000 and B 0.62000000, A wins 49.99969900
Math: With A 0.61900000 and B 0.62000000, A wins 50.00018900
Math: With A 0.62000000 and B 0.61900000, A wins 49.99981100
Math: With A 0.61800000 and B 0.61900000, A wins 50.00007710
Math: With A 0.61900000 and B 0.61800000, A wins 49.99992290
Math: With A 0.61700000 and B 0.61800000, A wins 49.99996530
Math: With A 0.61810000 and B 0.61800000, A wins 49.99999957
Math: With A 0.61790000 and B 0.61800000, A wins 49.99999931
Math: With A 0.61801000 and B 0.61800000, A wins 50.00000003
Math: With A 0.61802000 and B 0.61801000, A wins 50.00000002
Math: With A 0.61803000 and B 0.61802000, A wins 50.00000001
Math: With A 0.61804000 and B 0.61803000, A wins 50.00000000
Math: With A 0.61802000 and B 0.61803000, A wins 49.99999999
Math: With A 0.61803100 and B 0.61803000, A wins 50.00000000
Math: With A 0.61803200 and B 0.61803100, A wins 50.00000000
Math: With A 0.61803300 and B 0.61803200, A wins 50.00000000
Math: With A 0.61803400 and B 0.61803300, A wins 50.00000000
Math: With A 0.61803500 and B 0.61803400, A wins 50.00000000
Math: With A 0.61803300 and B 0.61803400, A wins 50.00000000
Math: With A 0.61803410 and B 0.61803400, A wins 50.00000000
Math: With A 0.61803390 and B 0.61803400, A wins 50.00000000
Math: With A 0.61803401 and B 0.61803400, A wins 50.00000000
Math: With A 0.61803399 and B 0.61803400, A wins 50.00000000
Math: With A 0.61803400 and B 0.61803399, A wins 50.00000000
Math: With A 0.61803398 and B 0.61803399, A wins 50.00000000
Math: With A 0.61803399 and B 0.61803399, A wins 50.00000000
Math: With A 0.61803399 and B 0.61803399, A wins 50.00000000
Nash rejection rate = 0.61803399
Comparing Nash rejection rate vs. 50% via math and Monte Carlo:
Math: With A 0.61803399 and B 0.50000000, A wins 50.43052317
MC: With A 0.61803399 and B 0.50000000, A wins 50.43086405%.
```

Finally, after finding the Nash rejection rate, I called AWinPct with it and 0.5 and compared that number to calling MCGameShow with it and 0.5, and one million trials. The results were very close, so I felt pretty good about the AWinPct function, but not 100% sure; there were all sorts of ways I could have done something wrong. But then as I was looking at it I had a terrible thought: what is the Golden Ratio? And the answer is… 1.61803399. So my proposed answer is the fractional part of the Golden Ratio? Seems legit.
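For the skeptical, here is a back-of-the-envelope sketch (my own, not an official solution) of why the golden ratio shows up. Suppose both players use threshold c. At equilibrium, a player holding exactly c must be indifferent between keeping and redrawing. The opponent's final number has CDF F(v) = cv for v < c, and F(v) = cv + (v - c) for v ≥ c. Then:

```latex
\begin{align*}
\Pr(\text{win}\mid\text{keep }c) &= F(c) = c^2,\\
\Pr(\text{win}\mid\text{redraw}) &= \int_0^1 F(u)\,du = \frac{c}{2} + \frac{(1-c)^2}{2}.\\
\text{Setting these equal: } c^2 &= \frac{c}{2} + \frac{(1-c)^2}{2}
  \;\Longrightarrow\; c^2 + c - 1 = 0
  \;\Longrightarrow\; c = \frac{\sqrt{5}-1}{2} \approx 0.61803399.
\end{align*}
```

The positive root of c² + c - 1 = 0 is exactly the fractional part of the Golden Ratio, which matches the 0.61803399 the search converged to.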

Here’s the link to the source code.

Thanks for reading!


Here’s this week’s problem:

Here are four different coded messages. Each has a different encryption scheme and they progress, I think, from easiest to toughest to crack. Submit the decrypted messages as your answers.

1. A zsnw kmuuwkkxmddq kgdnwv lzw XanwLzajlqWayzl Javvdwj!

2. xckik acvlbeg oz mmqn xnlautw. gzag, mwcht, kbjzh… ulw cpeq edr mom dhqx lksxlioil?

3. hy vg nw rh ev pr is or tf?

4. , ’ . . , ’ .

So the first two codes I was able to crack in very little time. I suspected that the first one would be a simple Caesar (a.k.a. wheel or rotation) cipher. I wrote a simple function that decrypts characters by shifting letter values by a given shift parameter, wrapping around the alphabet if necessary.

```cpp
#define ABC_SIZE 26

wchar_t WheelDecodeChar(wchar_t ch, int shift)
{
    wchar_t decoded = 0;
    if(ch >= 'a' && ch <= 'z')
    {
        decoded = ch + shift;
        while(decoded > 'z')
            decoded -= ABC_SIZE;
        while(decoded < 'a')
            decoded += ABC_SIZE;
    }
    else if(ch >= 'A' && ch <= 'Z')
    {
        decoded = ch + shift;
        while(decoded > 'Z')
            decoded -= ABC_SIZE;
        while(decoded < 'A')
            decoded += ABC_SIZE;
    }
    else
    {
        decoded = ch;
    }
    return decoded;
}

void WheelDecode(const wchar_t *message, int shift)
{
    wchar_t decoded[256] = {};
    int len = wcslen(message);
    for(int ch = 0; ch < len; ch++)
    {
        decoded[ch] = WheelDecodeChar(message[ch], shift);
    }
    wprintf(L"Possible message = %s\n", decoded);
}

void WheelDecodeAll(const wchar_t *message)
{
    for(int i = 0; i < ABC_SIZE; i++)
    {
        WheelDecode(message, i);
    }
}
```

When I called WheelDecodeAll I got this output:

```
Possible message = A zsnw kmuuwkkxmddq kgdnwv lzw XanwLzajlqWayzl Javvdwj!
Possible message = B atox lnvvxllyneer lheoxw max YboxMabkmrXbzam Kbwwexk!
Possible message = C bupy mowwymmzoffs mifpyx nby ZcpyNbclnsYcabn Lcxxfyl!
Possible message = D cvqz npxxznnapggt njgqzy ocz AdqzOcdmotZdbco Mdyygzm!
Possible message = E dwra oqyyaoobqhhu okhraz pda BeraPdenpuAecdp Nezzhan!
Possible message = F exsb przzbppcriiv plisba qeb CfsbQefoqvBfdeq Ofaaibo!
Possible message = G fytc qsaacqqdsjjw qmjtcb rfc DgtcRfgprwCgefr Pgbbjcp!
Possible message = H gzud rtbbdrretkkx rnkudc sgd EhudSghqsxDhfgs Qhcckdq!
Possible message = I have successfully solved the FiveThirtyEight Riddler!
Possible message = J ibwf tvddfttgvmmz tpmwfe uif GjwfUijsuzFjhiu Sjeemfs!
Possible message = K jcxg uweeguuhwnna uqnxgf vjg HkxgVjktvaGkijv Tkffngt!
Possible message = L kdyh vxffhvvixoob vroyhg wkh IlyhWkluwbHljkw Ulggohu!
Possible message = M lezi wyggiwwjyppc wspzih xli JmziXlmvxcImklx Vmhhpiv!
Possible message = N mfaj xzhhjxxkzqqd xtqaji ymj KnajYmnwydJnlmy Wniiqjw!
Possible message = O ngbk yaiikyylarre yurbkj znk LobkZnoxzeKomnz Xojjrkx!
Possible message = P ohcl zbjjlzzmbssf zvsclk aol MpclAopyafLpnoa Ypkksly!
Possible message = Q pidm ackkmaancttg awtdml bpm NqdmBpqzbgMqopb Zqlltmz!
Possible message = R qjen bdllnbboduuh bxuenm cqn OrenCqrachNrpqc Armmuna!
Possible message = S rkfo cemmoccpevvi cyvfon dro PsfoDrsbdiOsqrd Bsnnvob!
Possible message = T slgp dfnnpddqfwwj dzwgpo esp QtgpEstcejPtrse Ctoowpc!
Possible message = U tmhq egooqeergxxk eaxhqp ftq RuhqFtudfkQustf Duppxqd!
Possible message = V unir fhpprffshyyl fbyirq gur SvirGuveglRvtug Evqqyre!
Possible message = W vojs giqqsggtizzm gczjsr hvs TwjsHvwfhmSwuvh Fwrrzsf!
Possible message = X wpkt hjrrthhujaan hdakts iwt UxktIwxginTxvwi Gxssatg!
Possible message = Y xqlu ikssuiivkbbo ieblut jxu VyluJxyhjoUywxj Hyttbuh!
Possible message = Z yrmv jlttvjjwlccp jfcmvu kyv WzmvKyzikpVzxyk Izuucvi!
```

So, we can see that the ninth line (a shift of 8) is the answer we are looking for.

The second question is a little harder, but I was able to get it pretty easily. From looking at the string, “xckik acvlbeg oz mmqn xnlautw. gzag, mwcht, kbjzh… ulw cpeq edr mom dhqx lksxlioil?” you can kind of tell that the encryption alphabet must be changing. I was able to guess that this was probably a Vigenère cipher and wrote a function to modify the shift value passed into WheelDecodeChar based on a key. This works by using the first letter of the key as the shift value for the first letter, then the second letter in the key for the second letter, and so on. If the key is shorter than the message, you just go back to the beginning of the key.

```cpp
void WheelDecodeKey(const wchar_t *message, const wchar_t *key)
{
    int keyLen = wcslen(key);
    int msgLen = wcslen(message);
    wchar_t decoded[256] = {};
    int keyIdx = 0;
    for(int i = 0; i < msgLen; i++)
    {
        int shift = tolower(key[keyIdx % keyLen]) - 'a';
        decoded[i] = WheelDecodeChar(message[i], -shift);
        if(IsLetter(message[i]))
            keyIdx++;
    }
    wprintf(L"Possible message = %s\n", decoded);
}
```

The only trick was finding which key to use. I tried “Riddler”, “TheRiddler”, and then finally tried “FiveThirtyEight” and got this:

Possible message = super tuesday is this tuesday. cruz, trump, rubio... who will win the most delegates?

That looks about right. Cool, so I got the first two in pretty short order. How hard could the third one be?

What followed was about a full day of frustration before I decided to move on to the fourth one. The fourth one uses emojis instead of letters but I decided to fix that by making a function to assign letters to the emojis.

void MakeLettersFromEmojis(const wchar_t *message, wchar_t *out)
{
    // SUPER UGLY CODE with lots of hard-coded assumptions
    map<unsigned int, wchar_t> mapping;
    int len = wcslen(message);
    wchar_t ch = 'a';
    bool esc = false;
    int outIndex = 0;
    for(int i = 0; i < len; i++)
    {
        wchar_t m = message[i];
        if(m == ' ' || m == '.' || m == ',' || m == '\'' || m == 0x2019)
        {
            if(m == 0x2019)
                m = '\'';
            out[outIndex] = m;
            outIndex++;
            continue;
        }
        unsigned int m2 = m;
        if(esc)
        {
            m2 = (u16esc << 16) | m;
            esc = false;
        }
        else if(m == u16esc)
        {
            esc = true;
            continue;
        }
        auto itr = mapping.find(m2);
        if(itr == mapping.end())
        {
            mapping[m2] = ch;
            out[outIndex] = ch;
            outIndex++;
            ch++;
        }
        else
        {
            out[outIndex] = itr->second;
            outIndex++;
        }
    }
    out[outIndex] = 0;
    wprintf(L"%s\n", out);
}

That produced the following output:

ab, cdec'f gc. hai jkc lemb ca ianb. dko, ec pkefc gc'f qngreo.

Now this isn’t a simple Caesar cipher. This is a completely random alphabet. There are some really interesting algorithms that use frequency analysis to come up with likely cipher alphabets, but I just solved this by hand. (This looks like a lot of fun to code; I might revisit it later.) With the apostrophes, I guessed that the “f” was an “s”, which led me to guess that the “c” was a “t”.

ab, tdet's gt. hai jkt lemb ta ianb. dko, et pkest gt's qngreo.

That led me to guess that "de" in the second word was "ha" to spell "that's". Also the "g" would be an "i". This leads to:

ab, that's it. hai jkt lamb ta ianb. hko, at pkast it's qnirao.

So all those guesses were correct, which is not always how solving cryptograms goes, but I got lucky. From there it was a matter of some trial and error to finally get:

ok, that's it. now get back to work. hey, at least it's friday.
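As for the frequency-analysis approach I skipped: the counting step is simple enough to sketch. This is a hypothetical helper, not code from my actual solution:

```cpp
#include <cctype>
#include <map>
#include <string>

// Tally letter frequencies in a ciphertext. In a monoalphabetic
// substitution cipher the most frequent symbols tend to map to
// e, t, a, o, i, n -- a good starting point for guessing.
std::map<char, int> LetterFrequencies(const std::string &text)
{
    std::map<char, int> counts;
    for(char c : text)
        if(isalpha((unsigned char)c))
            counts[(char)tolower((unsigned char)c)]++;
    return counts;
}
```

On the emoji-derived string above, "c" is the runaway winner, which lines up nicely with it turning out to be a "t".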

Okay, now back to the third message and my frustration.

I spent the better part of a day trying to figure out what the encryption was for this string:

hy vg nw rh ev pr is or tf?

I decided it had to be digraphs and there’s an interesting cipher called a digraph substitution cipher that I coded up. I wrote code to generate all 676 possible digraph substitution alphabets based on shifting rows and columns by 26 characters. Then I used the output of those and scored each possible string by adding up the resultant digraphs and their frequency from this table. That was really cool except that this string wasn’t encrypted with a digraph substitution cipher, so none of the 676 possible alphabets produced any good candidates.
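The scoring step is worth a sketch, though. This is a simplified reconstruction rather than my exact code, and the frequency table passed in here is a placeholder; the real one covers all 676 digraphs:

```cpp
#include <cctype>
#include <map>
#include <string>

// Score a candidate decryption by summing frequency weights for each
// letter pair. Higher scores suggest more English-like text. Letters
// are collected first so that digraphs can span spaces.
double ScoreDigraphs(const std::string &text,
                     const std::map<std::string, double> &freq)
{
    std::string letters;
    for(char c : text)
        if(isalpha((unsigned char)c))
            letters += (char)tolower((unsigned char)c);

    double score = 0.0;
    for(size_t i = 0; i + 1 < letters.size(); i += 2)
    {
        auto itr = freq.find(letters.substr(i, 2));
        if(itr != freq.end())
            score += itr->second;
    }
    return score;
}
```

The idea is just to run every candidate alphabet through this and keep the highest scorer.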

Finally I was chatting with a friend about it who asked, “have you tried Playfair?” The answer was no because I had never heard of Playfair. Here’s the wikipedia link for the Playfair cipher. It’s pretty cool, and JFK used it once!

Anyway, after trying a plain alphabet I used the key “FIVETHRYG” (“FiveThirtyEight” with its repeated letters removed) and got the following output:

ar ey ou ha vi ng fu ny et?

(are you having fun yet?)

And that’s it! If I had had more time I probably would have coded that up, which would have been fun, but I spent too much time on the digraphs.
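In hindsight, the decoder isn't much code. Here's a sketch, written after the fact rather than as part of my original solution, assuming the standard Playfair rules: I and J share a cell, same-row pairs shift left, same-column pairs shift up, and anything else swaps rectangle corners:

```cpp
#include <cctype>
#include <string>
#include <vector>

// Build the 5x5 Playfair grid: key letters first, then the rest of
// the alphabet (I and J share a cell).
static void BuildGrid(const std::string &key, char grid[5][5])
{
    bool used[26] = {};
    used['j' - 'a'] = true; // J folds into I
    std::string order = key + "abcdefghiklmnopqrstuvwxyz";
    int n = 0;
    for(char c : order)
    {
        c = (char)tolower((unsigned char)c);
        if(c < 'a' || c > 'z' || used[c - 'a'])
            continue;
        used[c - 'a'] = true;
        grid[n / 5][n % 5] = c;
        n++;
    }
}

static void FindInGrid(char grid[5][5], char c, int &row, int &col)
{
    if(c == 'j')
        c = 'i';
    for(int r = 0; r < 5; r++)
        for(int k = 0; k < 5; k++)
            if(grid[r][k] == c) { row = r; col = k; return; }
}

// Decrypt a Playfair message; non-letters pass through untouched and
// digraphs are allowed to span spaces.
std::string PlayfairDecrypt(const std::string &cipher, const std::string &key)
{
    char grid[5][5];
    BuildGrid(key, grid);
    std::string out = cipher;
    std::vector<int> letters;
    for(size_t i = 0; i < out.size(); i++)
        if(isalpha((unsigned char)out[i]))
            letters.push_back((int)i);
    for(size_t p = 0; p + 1 < letters.size(); p += 2)
    {
        int i = letters[p], j = letters[p + 1];
        int r1, c1, r2, c2;
        FindInGrid(grid, (char)tolower((unsigned char)out[i]), r1, c1);
        FindInGrid(grid, (char)tolower((unsigned char)out[j]), r2, c2);
        if(r1 == r2)            // same row: shift left
        {
            out[i] = grid[r1][(c1 + 4) % 5];
            out[j] = grid[r2][(c2 + 4) % 5];
        }
        else if(c1 == c2)       // same column: shift up
        {
            out[i] = grid[(r1 + 4) % 5][c1];
            out[j] = grid[(r2 + 4) % 5][c2];
        }
        else                    // rectangle: swap columns
        {
            out[i] = grid[r1][c2];
            out[j] = grid[r2][c1];
        }
    }
    return out;
}
```

Feeding it the puzzle string with the "FIVETHRYG" key reproduces the digraphs above.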

Thanks for reading.


Anyway, this week’s Riddler concerns the worst person alive and a full airplane.

There’s an airplane with 100 seats, and there are 100 ticketed passengers each with an assigned seat. They line up to board in some random order. However, the first person to board is the worst person alive, and just sits in a random seat, without even looking at his boarding pass. Each subsequent passenger sits in his or her own assigned seat if it’s empty, but sits in a random open seat if the assigned seat is occupied.

What is the probability that you, the hundredth passenger to board, find your seat unoccupied?

So the answer is 1/2, half, 50%, however you want to say it. And this is true for the last passenger no matter how many passengers are on a (full) plane. I actually worked this out before writing any code (but don’t worry, I still wrote the code!). It’s trivial to show for the case where there are only two passengers: there are two seats, and the person in front of you is the worst person alive. He either takes his seat, or yours, randomly. Fifty percent, done.

For three people (1. Worst Person Alive, 2. Rando Calrissian, and 3. You) and three seats, we can still do it in our heads. WPA takes one of three seats, so right away you have a 1/3 chance of not getting your seat. But there is also a 1/3 chance that he takes Rando’s seat, in which case Rando will randomly take one of the two remaining seats, yours or WPA’s assigned seat. One-third times one-half is 1/6, so there is a 1/6 chance you will find Rando sitting in your seat, and a 1/3 chance WPA will be sitting there. So the chance your seat is taken is 1/3 + 1/6 = 1/2, which means the chance you find it free is also 1/2. Boom, done.
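That case analysis generalizes: whatever seat k WPA picks, either he takes his own seat (everyone downstream is fine), takes yours (you lose), or the whole problem restarts with the displaced passenger playing WPA among the remaining n - k + 1 seats. A quick sketch of that recursion, written after the fact to double-check the hand calculation (it's exponential as written, so small n only):

```cpp
// Exact probability that the last of n passengers finds their own
// seat free. When the first passenger picks seat k uniformly:
//   k = 1: success (everyone downstream sits correctly)
//   k = n: failure (your seat is gone)
//   else:  the problem restarts with n - k + 1 seats
double ProbLastSeatFree(int n)
{
    if(n <= 1)
        return 1.0;
    double total = 1.0; // the k = 1 case
    for(int k = 2; k < n; k++)
        total += ProbLastSeatFree(n - k + 1);
    return total / n;
}
```

It comes out to exactly 1/2 for every n >= 2, matching the hand calculation.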

Once I saw that the probability remained at 50% for three passengers, I suspected that it would be 50% for any number of passengers. I wrote a Monte Carlo to show that.

// AirplaneSeating.cpp : Defines the entry point for the console application.
// www.somedisagree.com, Jon Wiesman

#include "stdafx.h"
#include <vector>

#define SEAT_COUNT  100
#define EVIL_COUNT  1
#define AVAILABLE   -1

class Plane
{
    std::vector<int> m_seats;
    int m_available;
public:
    Plane(int seats)
    {
        m_seats.resize(seats);
        for(int i = 0; i < seats; i++)
            m_seats[i] = AVAILABLE;
        m_available = seats;
    }

    void ClaimSeat(int seat, int passenger)
    {
        m_seats[seat] = passenger;
        m_available--;
    }

    int FindUnclaimedSeat() const
    {
        if(m_available == 0)
            return -1;
        int nth = rand() % m_available;
        int empty = 0;
        for(size_t i = 0; i < m_seats.size(); i++)
        {
            if(IsSeatAvailable(i))
            {
                empty++;
                if(empty > nth)
                    return i;
            }
        }
        return -1;
    }

    bool IsSeatAvailable(size_t seat) const
    {
        return (seat >= 0 && seat < m_seats.size() && m_seats[seat] == AVAILABLE);
    }

    void GetSeat(int passenger, bool evil)
    {
        if(evil || !IsSeatAvailable(passenger))
        {
            int seat = FindUnclaimedSeat();
            ClaimSeat(seat, passenger);
        }
        else
        {
            ClaimSeat(passenger, passenger);
        }
    }

    void SimulateBoarding(size_t evilCount)
    {
        for(size_t i = 0; i < m_seats.size(); i++)
        {
            GetSeat(i, (i < evilCount));
        }
    }

    void CompileStats(std::vector<int> &stats, int &wrongSeats)
    {
        for(size_t i = 0; i < m_seats.size(); i++)
        {
            if(m_seats[i] == i)
                stats[i]++;
            else
                wrongSeats++;
        }
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    std::vector<int> stats;
    int wrongSeats = 0;
    stats.resize(SEAT_COUNT);
    int trials = 100000;
    for(int i = 0; i < trials; i++)
    {
        Plane plane(SEAT_COUNT);
        plane.SimulateBoarding(EVIL_COUNT);
        plane.CompileStats(stats, wrongSeats);
    }
    printf("After %d trials with %d evil passengers:\n", trials, EVIL_COUNT);
    for(int i = 0; i < SEAT_COUNT; i++)
    {
        printf("Passenger %d seated properly %.2f%% of time\n", i,
               100.0 * (double)stats[i] / (double)trials);
    }
    printf("Average of %.2f wrongfully seated passengers\n",
           (double)wrongSeats / (double)trials);
    return 0;
}

This code (download) is pretty simple. The Plane class keeps an array of seats and who (which passenger) is seated in each seat. It also keeps the number of empty seats at any time to make it easier to quickly choose a random seat. (For the purpose of this simulation it was convenient to say that the passenger index and seat assignment are the same.)

The constructor initializes the seat array to all available and sets the available count to the seat count. SimulateBoarding tells each passenger to find a seat. Finding a seat is easy. If you’re not evil and your seat is available, sit down. Otherwise, find a random available seat. Either way, call ClaimSeat to sit down and decrease the available member variable. A CompileStats function will increment the number of correct passenger-seat assignments and keep a running count of wrong ones. Finally we run this simulation 100,000 times and print out the results.

After 100000 trials with 1 evil passengers:
Passenger 0 seated properly 1.00% of time
Passenger 1 seated properly 99.03% of time
Passenger 2 seated properly 99.00% of time
Passenger 3 seated properly 99.00% of time
Passenger 4 seated properly 99.01% of time
Passenger 5 seated properly 98.93% of time
Passenger 6 seated properly 98.97% of time
Passenger 7 seated properly 98.95% of time
Passenger 8 seated properly 98.92% of time
Passenger 9 seated properly 98.98% of time
Passenger 10 seated properly 98.87% of time
.....
Passenger 90 seated properly 90.86% of time
Passenger 91 seated properly 90.03% of time
Passenger 92 seated properly 88.83% of time
Passenger 93 seated properly 87.42% of time
Passenger 94 seated properly 85.39% of time
Passenger 95 seated properly 83.11% of time
Passenger 96 seated properly 79.98% of time
Passenger 97 seated properly 74.74% of time
Passenger 98 seated properly 66.45% of time
Passenger 99 seated properly 49.95% of time
Average of 5.19 wrongfully seated passengers

So I ran this code really just to confirm that my guess of 50% would be correct. However, I saw something which kind of jogged my memory. The average number of wrongfully seated passengers out of 100 was 5.19. This was the average number of traffic jams with 100 cars from two weeks ago! (The exclamation point at the end of the last sentence might be the nerdiest exclamation point ever.) It turns out that the number of wrongfully seated passengers follows a Harmonic Series.

What this means is that the probability of each passenger being wrongly seated follows this pattern, starting from the last passenger to board: 1/2, 1/3, 1/4, … You can see from the output that the chance of each passenger not getting their seat is indeed following that pattern, with a little Monte Carlo variance thrown in.

“But wait,” you say, probably not, but let’s pretend you do, “a Harmonic Series starts with one and you don’t have a one in there.” Oh, but I do; it just comes at the end. Remember that in our simulation, Worst Person Alive boards first and therefore has a 99/100 chance of sitting in the wrong seat. The next person to board, let’s call him Rando Paul, has a 1/100 chance of finding WPA in his seat. 99/100 + 1/100 = 1. The next person then has a 1/99 chance and on down to 1/2 for the last passenger. So the average number of wrongly seated passengers for N passengers with the Worst Person Alive seating first boils down to 1 + 1/2 + 1/3 + … + 1 / (N – 1).
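Summing that series is barely a function's worth of code, and it lands close to the Monte Carlo's 5.19 (about 5.18 exactly):

```cpp
// Expected number of wrongly seated passengers for n seats:
// 1 + 1/2 + 1/3 + ... + 1/(n - 1), i.e. the (n-1)th harmonic number.
double ExpectedWrongSeats(int n)
{
    double sum = 0.0;
    for(int k = 1; k < n; k++)
        sum += 1.0 / k;
    return sum;
}
```
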

Traffic jams and airline travel: unexpected sources of harmony. Thanks for reading!


The nice thing was that once I understood the solution, it was really easy to rewrite the SmartDuckStrategy to use the “J” strategy outlined in the post. The result is exactly what was predicted by Datagenetics. Here’s a GIF:

Yay! It’s nice when things work out, even if I didn’t solve this one (or get close) myself. The new code is a little simplistic in that it assumes the dog will start at a position of 0 degrees, but here it is:

DPoint SmartDuckStrategy::GetDestinationVector(Pond *pond)
{
    if(pond->GetBeeline())
        return pond->GetLastDuckVector();

    DPoint focus(0, -1 / (pond->GetDogSpeed() * 2.0));
    DPoint duckPos = pond->GetDuckPos();
    DPoint delta = duckPos - focus;
    double angleFromFocus = atan2(delta.y, delta.x);
    if(angleFromFocus > -PI / 2.0 && angleFromFocus < 0)
    {
        pond->SetBeeline();
        return DPoint(1, 0);
    }
    angleFromFocus += PI / 2.0;
    DPoint vec(cos(angleFromFocus), sin(angleFromFocus));
    return vec;
}

So I take some comfort that my underlying simulation was solid enough that just changing the behavior of the duck was sufficient to illustrate the right answer. It’s not much, but it’s all I have!

Thanks again to Alex and Lego for posting comments. It means a lot.


Things are turning a bit dark at the 538 Riddler as this week Ollie imagines a frantic dog and a beleaguered duck trapped in a never-ending Kafkaesque chase. My answer for last week’s puzzle was correct, but as I said later, it was much more complicated than it needed to be.

This week, however, I don’t think I will even get a correct-but-overly-complex answer. I’m positive that my answer is wrong.

Here’s the scenario for these poor creatures:

There is a duck paddling in a perfectly circular pond, and a Nova Scotia duck tolling retriever prowling on the pond’s banks. The retriever very much wants to catch the duck. The dog can’t swim, and this particular duck can’t take flight from water. But the duck does want to fly away, which means it very much wants to make it safely to land without meeting the dog there. Assume the dog and the duck are both very smart and capable of acting perfectly in their best interests — they’re both expert strategists and tacticians. Say the duck starts in the center of the pond, and the dog somewhere right on the pond’s edge. To ensure that the duck could never escape safely,

how many times faster would the dog have to be able to run relative to how fast the duck can swim? (Assume the dog is a good boy, yes he is, and thus very patient. The duck can’t simply wait him out. Also assume both animals can change direction without losing speed, and that the duck can take flight safely immediately after reaching shore if the dog isn’t there.)

Okay, so obviously there’s a mathy solution to this problem, but I don’t know it. I wrote a program to simulate the dog and duck and gave the duck some different strategies for getting to the shore. One thing that is obvious: the dog will never catch the duck. Either the duck will escape because the dog is too slow, or the dog will circle the pond forever as the duck swims back and forth in vain. Geez, Ollie, next to this, last week’s nightmare traffic problem is looking downright cheerful.

Designing the simulation was pretty fun, even if I don’t think it led me to the right solution. I basically came up with four different strategies that the duck might take to escape the pond. I call these Dumb, Cowardly, Clever, and Smart. Each strategy has a threshold at which it stops working, which is the question being asked.

The Dumb strategy simply involves moving due west and hoping the dog is too slow. It will succeed as long as the dog is moving slower than PI * the duck’s speed.

The Cowardly strategy just tries to avoid the dog at all costs, changing the duck’s direction to be away from the dog. Interestingly, the Cowardly strategy does worse than the Dumb strategy for the duck! The dog can “scare” the duck into remaining in the pond by getting close enough to push it back towards the center, even when moving at only 0.9483 * PI * the duck’s speed. All ducks die, but a cowardly duck is doomed to swim in a pond forever or something.

The Clever strategy works like the Cowardly strategy except that the duck is constantly calculating how long it will take to reach the shore in a straight line, and whether the dog can make it there first. If the duck can make it to the edge first, it makes a beeline for the shore. The Clever strategy requires the dog to move at 1.2093 * PI * the duck’s speed to prevent takeoff.

Finally, my last strategy was the Smart strategy. This is what I submitted for an answer, but I will be shocked if it is right. The Smart strategy acts like the Clever strategy except the duck will scan a range of spots on the shore and if ANY of them can be reached before the dog, it will beeline. It represents only a small improvement over Clever, requiring the dog to move at 1.2552 * PI * the duck’s speed to trap the fowl.

A couple of things to note here. When the duck escapes, the dog is often so close that it looks like they are overlapping, but this is an abstract duck with no size, only a location, so unless the dog is exactly at the right spot, the duck escapes if it crosses the 1 unit border of the pond. Also, my strategy for the dog is to just go to the point on the edge of the pond that is closest to the duck and wait like a good boy, who’s a good boy, you’re a good boy. I’m not sure if that’s optimal but any movement from that spot seems like a mistake until the duck changes course or crosses the center point. This is one area where I think I am probably wrong.

The other area where I ~~could be~~ am probably wrong is in the way that I have the Smart duck scan the shore for a likely escape route. I just choose a range of shoreline and increment test locations in discrete chunks looking for a candidate. At the very least, a mathy solution would do that continuously, not discretely.

I wrote this app using MFC and Gdiplus so I’m not going to include the whole project here. I’ll just post the pond.h and pond.cpp files, which is where all the calculations and simulation are being done. Everything else is just presentation.

First off, I assume the pond has a radius of 1 unit with its center at 0,0, and that the duck moves at a rate of 1 unit/second. I store an x- and y-variable for the duck and start it at the center of the pond.

I could have used x- and y-variables for the dog, but since it always moves along the edge of the circular pond, I instead store its position as a single real value from -PI to PI. (It really should be 0 to TAU, because PI is wrong, no really, but whatevs.) I start the dog at 0, meaning the East edge of the pond.

The dog’s speed is expressed in terms of the duck’s speed (1/sec) * TAU/2 (okay, fine, PI.) So if the dog moves at PI * the duck’s speed, he can make a half orbit of the pond by the time the duck moves from the center to the edge. Using these basic units makes the code very simple since the time required for the duck to reach the edge is equal to 1 minus its distance from the center of the pond. The time required for the dog to reach any point of the pond’s edge is simply the angle to that point minus the dog’s position divided by the dog’s speed.

I also created a simple DPoint class with some convenient operators to make things a little easier.

// Pond.h
#pragma once

#include <math.h>

#define PI  3.1415926535897932384626433832
#define TAU (PI * 2)

enum EDuck
{
    DumbDuck,
    CowardlyDuck,
    CleverDuck,
    SmartDuck,
};

struct DPoint
{
    double x;
    double y;
    void Set(double _x, double _y) {x = _x; y = _y;}
    double Magnitude() const { return sqrt(x * x + y * y); }
};

static DPoint operator -(const DPoint &l, const DPoint &r)
{
    DPoint pt;
    pt.x = l.x - r.x;
    pt.y = l.y - r.y;
    return pt;
}

static DPoint operator +(const DPoint &l, const DPoint &r)
{
    DPoint pt;
    pt.x = l.x + r.x;
    pt.y = l.y + r.y;
    return pt;
}

static DPoint operator *(const DPoint &l, const DPoint &r)
{
    DPoint pt;
    pt.x = l.x * r.x;
    pt.y = l.y * r.y;
    return pt;
}

static DPoint operator *(const DPoint &l, double scalar)
{
    DPoint pt;
    pt.x = l.x * scalar;
    pt.y = l.y * scalar;
    return pt;
}

class Pond;

class DuckStrategy
{
public:
    virtual DPoint GetDestinationVector(Pond *pond) = 0;
};

class DumbDuckStrategy : public DuckStrategy
{
public:
    virtual DPoint GetDestinationVector(Pond *)
    {
        DPoint pt;
        pt.x = -1.0;
        pt.y = 0;
        return pt;
    }
};

class CowardlyDuckStrategy : public DuckStrategy
{
public:
    virtual DPoint GetDestinationVector(Pond *pond);
};

class CleverDuckStrategy : public CowardlyDuckStrategy
{
public:
    virtual DPoint GetDestinationVector(Pond *pond);
};

class SmartDuckStrategy : public CowardlyDuckStrategy
{
public:
    virtual DPoint GetDestinationVector(Pond *pond);
};

class Pond
{
    double m_dogPos;   // in radians, 0-tau
    double m_dogSpeed; // in radians/sec
    DPoint m_duckPos;
    DPoint m_lastDuckVector;
    EDuck m_behavior;
    bool m_escaped;
    bool m_beeline;
    double m_elapsed;
    double m_beelineTime;
    CowardlyDuckStrategy m_cowardly;
    DuckStrategy *m_strategy;
public:
    Pond() : m_dogPos(0)
    {
        SetStrategy(&m_cowardly);
        Restart(PI);
        m_escaped = true; // so we have to hit restart to start
    }

    void Restart(double dogSpeed)
    {
        m_beeline = false;
        m_elapsed = 0;
        m_beelineTime = 0;
        m_dogPos = 0;
        m_dogSpeed = dogSpeed;
        m_escaped = false;
        m_duckPos.Set(0, 0);
        m_lastDuckVector.Set(0, 0);
    }

    void SetStrategy(DuckStrategy *strat) { m_strategy = strat; }
    void SetBeeline() {m_beeline = true; m_beelineTime = m_elapsed;}
    bool GetBeeline() const {return m_beeline;}
    double GetElapsed() const {return m_elapsed;}
    DPoint GetDuckPos() const {return m_duckPos;}
    double GetDogPolar() const {return m_dogPos;}
    double GetDogSpeed() const {return m_dogSpeed;}
    DPoint GetLastDuckVector() const {return m_lastDuckVector;}

    DPoint GetDogPos() const
    {
        DPoint pos;
        pos.x = cos(m_dogPos);
        pos.y = sin(m_dogPos);
        return pos;
    }

    double GetDogTimeToDest(double dest) const;
    double GetDuckTimeToDest(double dest) const;

    void Process(double sec)
    {
        if(m_escaped || m_strategy == NULL)
            return;

        // get duck vector
        DPoint duckVec = m_strategy->GetDestinationVector(this);
        m_lastDuckVector = duckVec;

        // get dog destination
        double mag = m_duckPos.Magnitude();
        double dogDest = PI;
        if(mag != 0.0)
        {
            double s = m_duckPos.y / mag;
            double c = m_duckPos.x / mag;
            dogDest = atan2(s, c);
        }

        // update duck position
        duckVec = duckVec * sec;
        m_duckPos = duckVec + m_duckPos;
        mag = m_duckPos.Magnitude();
        if(mag >= 1.0)
        {
            double overage = mag - 1.0;
            m_duckPos = m_duckPos * (1 / mag);
            sec -= overage;
            m_escaped = true;
        }
        m_elapsed += sec;
        if(mag < sec)
        {
            m_duckPos.Set(0, 0);
        }

        // update dog position
        double dogDelta = dogDest - m_dogPos;
        if(dogDelta < -TAU)
            dogDelta += TAU;
        else if(dogDelta > TAU)
            dogDelta -= TAU;
        double dogProgress = m_dogSpeed * sec;
        if(dogProgress > abs(dogDelta))
        {
            m_dogPos = dogDest;
        }
        else
        {
            m_dogPos += dogProgress;
            if(m_dogPos > PI)
                m_dogPos -= TAU;
            else if(m_dogPos < -PI)
                m_dogPos += TAU;
        }
    }
};

// Pond.cpp
#include "stdafx.h"
#include "Pond.h"

DPoint CowardlyDuckStrategy::GetDestinationVector(Pond *pond)
{
    DPoint dogPos = pond->GetDogPos();
    DPoint duckPos = pond->GetDuckPos();
    DPoint pt = duckPos - dogPos;
    double mag = pt.Magnitude();
    if(mag == 0)
    {
        pt.x = -1.0;
        pt.y = 0;
    }
    else
    {
        pt.x /= mag;
        pt.y /= mag;
    }
    return pt;
}

DPoint CleverDuckStrategy::GetDestinationVector(Pond *pond)
{
    DPoint curVec = pond->GetDuckPos();
    double mag = curVec.Magnitude();
    if(mag == 0)
        return CowardlyDuckStrategy::GetDestinationVector(pond);

    double toGoal = 1.0 - mag;
    double dogPos = pond->GetDogPolar();
    double s = curVec.y / mag;
    double c = curVec.x / mag;
    double dogDest = atan2(s, c);
    double dogTime = pond->GetDogTimeToDest(dogDest);
    if(dogTime > toGoal)
    {
        pond->SetBeeline();
        curVec = curVec * (1 / mag);
        return curVec;
    }
    return CowardlyDuckStrategy::GetDestinationVector(pond);
}

DPoint SmartDuckStrategy::GetDestinationVector(Pond *pond)
{
    DPoint curVec = pond->GetDuckPos();
    double mag = curVec.Magnitude();
    if(pond->GetBeeline())
    {
        return pond->GetLastDuckVector();
    }
    if(mag == 0)
        return CowardlyDuckStrategy::GetDestinationVector(pond);

    double s = curVec.y / mag;
    double c = curVec.x / mag;
    double dest = atan2(s, c);
    double start = dest;
    double end = dest + PI / 8.0;
    double inc = (end - start) / 100;
    for(double test = start; test <= end; test += inc)
    {
        double duckTime = pond->GetDuckTimeToDest(test);
        double dogTime = pond->GetDogTimeToDest(test);
        if(duckTime < dogTime)
        {
            DPoint pt;
            pt.Set(cos(test), sin(test));
            pt = pt - curVec;
            pt = pt * (1 / pt.Magnitude());
            double one = pt.Magnitude();
            pond->SetBeeline();
            return pt;
        }
    }
    return CowardlyDuckStrategy::GetDestinationVector(pond);
}

double Pond::GetDuckTimeToDest(double dest) const
{
    DPoint pt;
    pt.x = cos(dest);
    pt.y = sin(dest);
    pt = pt - m_duckPos;
    return pt.Magnitude();
}

double Pond::GetDogTimeToDest(double dogDest) const
{
    double dogPos = GetDogPolar();
    double dogDelta = dogDest - dogPos;
    if(dogDelta < -PI)
        dogDelta += TAU;
    else if(dogDelta > PI)
        dogDelta -= TAU;
    double degrees = dogDelta * 360.0 / PI;
    dogDelta = abs(dogDelta);
    ASSERT(dogDelta <= PI);
    double dogTime = dogDelta / GetDogSpeed();
    return dogTime;
}

I chose to wrap most of the code for my pond in a single object that owns the duck and the dog. I could have chosen to make separate classes for the duck and dog and maybe have multiple ducks and multiple dogs each taking different approaches but then I thought, what if I didn’t instead.

As I said, I don’t think I got this one right, so I’ll be really interested to see what Ollie posts as the answer next Friday. Thanks for reading!
