
When your cooking skills leave you


Fat Guy


On "statistical models" and 'slumps' in cooking and baseball. Or is Steven 'losing it'? Are the women suffering from PMS? Is something wrong with the cooking, or is everything okay?

Let's start with 'regression toward the mean':

Suppose we find 100 excellent chefs, with skills far above the average for chefs, and then look at the skills of, say, their apprentices or children.

Will the skills of the apprentices or children also be as good as those of the 100 top chefs? Typically no. The skills of the apprentices or children will be closer to the average for all chefs than are the skills of the 100 top chefs.

This phenomenon is called 'regression toward the mean' (where 'mean' is synonymous with 'average').

Here is one case where the cause of regression toward the mean is obvious:

Send 1000 people to Las Vegas each with $1000 (to be clear, I'm NOT offering to fund this experiment!) to play slot machines for a day or until they lose their $1000, whichever comes first. Take the 100 people who did the best.

Give each of these 100 people $1000 again and repeat -- have them play the slot machines for a day or until they lose the $1000, whichever comes first.

Now separately for each of the 1000 people, the 100 best of the 1000, and the second effort of the 100, find the average winnings. Call these averages, respectively, X, Y, and Z. Then typically Z < Y, and about half the time even Z < X. So, Z moves from Y down toward X and about half the time is actually less than X. This is an example of 'regression toward the mean': Z, the average from the second effort of the first-effort winners, moves toward X, the mean of the first efforts of all 1000.

The cause: On the first effort, the 100 best of the 1000 were just lucky, and their luck didn't hold on the second effort.

Or, more generally: given an effort with some exceptionally good results, some of those results were likely just luck, so on the next effort the performance will usually 'regress toward the mean' and be less good.
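
For anyone who would rather see this with simulated numbers than fund the trip, here is a minimal Python sketch of the experiment. The payout model is just an assumption for illustration (500 even-money $10 bets per day, a fair game with no house edge); nothing hinges on it.

import random

random.seed(1)

def day_at_the_slots():
    # Toy model of a day at the slots: 500 independent even-money $10 bets.
    # (An assumption for illustration only; real slot machines have a house edge.)
    return sum(random.choice((-10, 10)) for _ in range(500))

players = 1000
first = [day_at_the_slots() for _ in range(players)]

X = sum(first) / players                      # average over all 1000 first efforts

best = sorted(range(players), key=lambda i: first[i], reverse=True)[:100]
Y = sum(first[i] for i in best) / 100         # average of the 100 best first efforts

second = [day_at_the_slots() for _ in best]   # the 100 winners play a second day
Z = sum(second) / 100                         # average of their second efforts

print(X, Y, Z)   # typically Z falls far below Y and lands near X

With a fair game, Y comes out well above zero while X and Z both hover near zero; the winners' second day 'regresses' essentially all the way back to the mean, because their edge was pure luck.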

So, I conclude that regression toward the mean has nothing to do with the cooking of Steven and the women, with 'slumps' in baseball, etc. Let's set this topic aside and move on to something that can get us some progress.

For hot and cold streaks in baseball and Steven's cold streak in the kitchen, here is a probabilistic (not really 'statistical') explanation:

Suppose things are arriving one at a time and we notice when they arrive and count the number of arrivals so far. We start counting at time 0 with 0 arrivals. Suppose that after time t >= 0 we have N(t) arrivals.

Note: Yes, for each value of t, the count N(t) is what we observe on one 'trial' of an 'experiment' that hypothetically might have been performed many times.

We make two assumptions:

Independent Increments: For time s >= 0, we assume that the 'increment' in arrivals, N(t + s) - N(t), that is, the number of arrivals in time s starting at time t, is 'independent' of all N(u) for u <= t. The intuitive definition of 'independent' is 'has nothing to do with' or 'knowing N(u) for u <= t' does not 'help' in predicting the increment N(t + s) - N(t).

Note: A more precise definition would take us into 'currents of sigma-algebras', and let's not go there here.

Stationary Increments: The probability distribution of the increment N(t + s) - N(t) depends only on s and is the same for all t.

From these 'qualitative' assumptions we can show that there must exist some number r >= 0, that we call the 'arrival rate', so that, for each whole number k = 0, 1, 2, ..., the probability of N(t) = k is just

P( N(t) = k ) = exp(-rt) (rt)^k / k!

where exp(-rt) is the constant e ~ 2.71828183 (thank you, Google!) raised to the power (-rt). Also k! is a 'factorial', that is, the product

k! = (1) (2) ... (k)

To check, we might recall that for x >= 0

exp(x) = 1 + x + x^2 / 2! + x^3 / 3! + ...

Then, putting x = rt in this series and multiplying through by exp(-rt), we get

1 = exp(-rt) exp(rt) = exp(-rt) + exp(-rt) (rt) + exp(-rt) (rt)^2 / 2! + ...

so that

1 = P( N(t) = 0 ) + P( N(t) = 1 ) + P( N(t) = 2 ) + ...

as we want. How 'bout that!

The whole collection of N(t) for t >= 0 is a 'Poisson process' with 'arrival rate' r and

P( N(t) = k ) = exp(-rt) (rt)^k / k!

is the 'Poisson' distribution with parameter rt.
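
To see the formula in action, here is a tiny Python sketch; the rate r = 2 per hour and the window t = 1.5 hours are made-up numbers for illustration. The last line is the series check from above done numerically.

from math import exp, factorial

r, t = 2.0, 1.5        # made-up arrival rate (per hour) and time window (hours)

def p_N_eq(k):
    # P( N(t) = k ) = exp(-rt) (rt)^k / k!
    return exp(-r * t) * (r * t) ** k / factorial(k)

for k in range(6):
    print(k, p_N_eq(k))

# The probabilities add to 1, just as the exp(x) series argument promises.
print(sum(p_N_eq(k) for k in range(200)))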

If we let T(k) be the time of arrival k, then it follows that

P( T(1) <= t ) = 1 - exp(-rt)

So here we have the cumulative distribution of the time until the first arrival. This distribution is the 'exponential' distribution with parameter r.

It turns out, curiously, that the distribution of T(1) is also the distribution of the time until the next arrival counted from any time! That is, the Poisson process 'has no memory'. That is, whether we just had an arrival or have been waiting for an hour without one, the distribution of the time until the next arrival is the same! Or, even if the arrival rate is one an hour and you have been waiting an hour, you can't conclude that the next arrival is 'due real soon now, y'hear?'.
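
Here is a quick simulation sketch of that 'no memory' claim (Python, with an assumed rate of one arrival per hour): compare the average wait starting from time 0 with the average additional wait in just those runs where a full hour passed with no arrival.

import random

random.seed(2)
r = 1.0        # assumed rate: one arrival per hour

# Waiting time for the first arrival, drawn from the exponential distribution with rate r.
waits = [random.expovariate(r) for _ in range(200_000)]

mean_from_zero = sum(waits) / len(waits)

# Now condition on having already waited a full hour with no arrival,
# and measure only the additional waiting time from that point on.
still_waiting = [w - 1.0 for w in waits if w > 1.0]
mean_after_an_hour = sum(still_waiting) / len(still_waiting)

print(mean_from_zero, mean_after_an_hour)   # both come out near 1/r = 1 hour

Both averages come out near one hour; having already waited buys you nothing.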

The corresponding probability density function of T(1) is

r exp(-rt)

It follows that the expectation ('mean', 'average') of T(1) is

E[T(1)] = 1/r

Similarly the expectation of T(k), the time of the k-th arrival, is

E[T(k)] = k/r

and the expectation of N(t) is

E[N(t)] = rt

So, the quantity r does look like an average 'arrival rate': In time t, the average number of arrivals is just rt. Or, in time t = 1, the average number of arrivals is just r.
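
As a quick numerical sanity check on E[T(1)] = 1/r, here is a short Python sketch that integrates t times the density r exp(-rt); the rate r = 3 is a made-up value.

from math import exp

r = 3.0         # made-up arrival rate
dt = 0.0001

# Numerically integrate t * r * exp(-r t) from t = 0 out to a cutoff of 20,
# which should recover the expectation E[T(1)] = 1/r.
mean_T1 = sum(i * dt * r * exp(-r * i * dt) * dt for i in range(200_000))

print(mean_T1, 1 / r)   # both about 0.3333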

Our assumptions of stationary and independent increments provide a qualitative 'axiomatic' derivation of the Poisson process. This derivation is nice because often in practice we can have some confidence in the assumptions of stationary and independent increments just intuitively. Or, "Look, Ma! No data!". Or, just do it all with intuitive hand waving! That is, the assumptions are all qualitative.

There are similar qualitative axiomatic derivations, with slightly weaker assumptions, due to S. Watanabe and to A. Renyi.

Exercise: Check that T(k) <= t exactly when N(t) >= k so that

P( T(k) <= t ) = P( N(t) >= k ).

It turns out that the times between arrivals:

T(1), T(2) - T(1), T(3) - T(2), ...

are independent and have the same distribution

1 - exp(-rt)
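
That last fact also gives an easy way to simulate the whole process: just add up independent exponential gaps. Here is a minimal Python sketch (the rate r = 2 per hour, the window t = 4 hours, and the arrival index k = 5 are made-up values) that also checks E[T(k)] = k/r and E[N(t)] = rt.

import random
from bisect import bisect_right

random.seed(3)
r, t, k = 2.0, 4.0, 5     # made-up rate (per hour), time window (hours), and arrival index
runs = 20_000

total_T_k = 0.0
total_N_t = 0

for _ in range(runs):
    # Build one sample path: arrival times are running sums of independent exponential gaps.
    times, now = [], 0.0
    while now <= t or len(times) < k:
        now += random.expovariate(r)
        times.append(now)
    total_T_k += times[k - 1]                 # T(k), the time of arrival k
    total_N_t += bisect_right(times, t)       # N(t), the number of arrivals by time t

print(total_T_k / runs, k / r)   # about 2.5, and exactly 2.5
print(total_N_t / runs, r * t)   # about 8.0, and exactly 8.0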

So, let's connect 'slumps' in Steven's cooking and baseball:

We let an 'arrival' be a good dish cooked in the kitchen or a base hit in baseball.

To 'test' for a 'slump', let's tentatively entertain the assumptions of stationary and independent increments.

Then the arrival times of good kitchen or baseball results will be just

T(1), T(2), T(3), ...

Then, looking at these times, curiously, people commonly guess that the arrivals come in 'bunches' or 'clumps' separated by some long empty periods or 'slumps'.

That is, people can observe such bunches and slumps in just a Poisson process, with stationary and independent increments and with independent, identically distributed times between arrivals, that is, without any underlying 'cause'.

So, when we see 'bunches' and 'slumps', we might just be looking at a silly, meaningless Poisson process instead of any real cause.

So, before we insist on finding a cause, we need better evidence than just intuitive eyeball 'bunches' and 'slumps'.
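
To make that concrete, here is a little Python sketch of a pure Poisson process, with an assumed rate of one good result per day over 200 days and nothing else going on. Every gap between good results comes from the same exponential distribution, yet the output shows plenty of tight 'bunches' and a few long 'dry spells', purely by chance.

import random

random.seed(4)
r, horizon = 1.0, 200.0     # assumed: one good result per day, on average, for 200 days

gaps, now = [], 0.0
while True:
    g = random.expovariate(r)
    if now + g > horizon:
        break
    now += g
    gaps.append(g)

print("good results:", len(gaps))
print("average gap between them (days):", sum(gaps) / len(gaps))
print("longest dry spell (days):", max(gaps))
print("gaps shorter than half a day:", sum(1 for g in gaps if g < 0.5))

A dry spell of five days or more shows up in most runs, even though the process has no slumps built into it at all.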

For Steven, there is no evidence that anything is wrong! So, Steven, given the data, we reject the hypothesis that you are 'losing it' and conclude that you are healthy after all! :smile:

For the women on this thread, the cooking problems do not have to be caused by the conjectured 'women's problems'! Sorry, men! Maybe there actually are 'women's problems', but one week of burned stew, omitted ingredients, spilled milk, etc. is not good evidence!

Men, I KNOW what sad news this can be! :smile:

But, wait, there's more! Suppose we are running a restaurant serving a city of some hundreds of thousands of people. For each person, the times at which they go to that restaurant form an arrival process, although likely not a Poisson process. But if the people act independently (not always reasonable) and each satisfies a mild additional assumption, then on Saturday night from 7 to 8 PM the combined arrivals will nicely approximate a Poisson process. Here we are using the superposition result (the Palm-Khintchine theorem) that a sum of many independent, individually sparse arrival processes converges to a Poisson process as the number of processes grows.

Similarly for the number of arrivals at the eG Web site.

Yes, to make this work, we have to pick a time interval where the arrival rate remains constant or generalize slightly to 'non-stationary' Poisson processes.
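
Here is a toy Python sketch of that superposition effect; it is a stand-in, not a full renewal-process simulation, and the numbers (100,000 people, each with an assumed 0.08% chance of coming between 7 and 8 PM, at a uniformly random minute) are made up for illustration.

import random
from math import exp, factorial

random.seed(5)
n, p = 100_000, 0.0008      # assumed: 100,000 people, each with a 0.08% chance of coming tonight

# Each person who decides to come picks an independent, uniformly random minute in the hour.
arrivals = sorted(random.random() * 60.0 for _ in range(n) if random.random() < p)

lam = n * p                 # expected head count: 80 diners
count = len(arrivals)
print("diners between 7 and 8:", count, "   expected:", lam)

# The Poisson model's probability of seeing exactly that head count.
print("Poisson P(count):", exp(-lam) * lam ** count / factorial(count))

# Gaps between successive arrivals should average about 60/lam minutes.
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
print("average gap (minutes):", sum(gaps) / len(gaps), "   vs 60/lam =", 60 / lam)

The head count lands near the Poisson mean, and the gaps between successive arrivals average out to about 60/lam minutes, which is what a Poisson process at that rate would give.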

Net, with no data at all, we concluded that Steven and the women are all okay! How 'bout that! That has to be the end of the movie! :smile:

Postscript: Never mentioned the Gaussian distribution!

What would be the right food and wine to go with

R. Strauss's 'Ein Heldenleben'?


In short form: If you're at the bottom, performance-wise, there's no place to go but up. Conversely, if you're at the top, there's no place to go but down. :laugh:

"Commit random acts of senseless kindness"


Thank you, slkinsey & project, for the statistical explanation. At one point in my life, I would have understood what you're saying. I had to study statistics in grad school. I managed to learn enough to pass the exam. But after the exam, upon leaving the classroom, I tilted my head to one side & all the statistics fell out. End of statistical knowledge for the rest of my life.

But to return to the topic--

If I feel tense or distracted, my cooking skills go kablooey. It is actually better for me to buy takeout at these times. Then I won't obsess over what's really bothering me and the bad food I'm cooking. Right now I'm pushing a project deadline. Last week I blew a batch of potato bread, a batch of chicken stock (how can you mess up chicken stock? don't ask) and something else that took a long time to prepare, but I burned it at the last step--I don't remember what the dish was now, because I've repressed the memory.

But the silver lining is that you realize who your friends are. I gave a loaf of the potato bread to a friend, & said, It tastes fine, even if it doesn't look so good. He said, That's OK, I'll eat it with my eyes closed. You see? A true friend.


In short form:  If you're at the bottom, performance-wise, there's no place to go but up.  Conversely, if you're at the top, there's no place to go but down.  :laugh: 

Yes, this is similar to what I said about 'regression to the mean'.

This can explain: if Steven had a fantastic week cooking, or a hitter in baseball batted 1.000 in each of the past five games, then we expect the next week to be less good!

But this does not explain Steven's cooking problems, 'slumps' in baseball, or the cooking problems the women attributed to PMS, etc.

That is, if each of these persons has been doing fine and now for a whole week has been messing up, then is this evidence of a problem? That is, is Steven 'losing it'? Is there a sad cause?

I argue that long periods of no good results and bunches of normal or better results do not have to have a cause, and this is quite different from regression toward the mean.

What would be the right food and wine to go with

R. Strauss's 'Ein Heldenleben'?


Years ago I read an article claiming that the idea that hitting streaks, or slumps, are the result of a batter somehow going "hot" or "cold" is a myth.  Essentially, a batter's performance is a collection of random events, unpredictable in the short run but statistically consistent in the long run.  If you hit .300, we know that over the course of a season you'll get a base hit in three out of every ten at-bats.  But because each at-bat is a random occurrence (more or less), sometimes you're going to go 18-for-25 over the course of a critical week in August, and sometimes you're going to go 3-for-25.  Not because you were "on" in the first instance and "off" in the second, but because that's the way random events accumulate.
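
To put rough numbers on that, here is a short Python calculation treating each at-bat as an independent 30% chance of a hit (a simplification, of course).

from math import comb

p, n = 0.300, 25     # a .300 hitter over a 25 at-bat week, each at-bat treated as independent

def chance(lo, hi):
    # P( lo <= hits <= hi ) under the binomial model
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(lo, hi + 1))

print("18-for-25 or better:", chance(18, 25))   # about 2 chances in 100,000
print("3-for-25 or worse:  ", chance(0, 3))     # a bit over 3 percent

So under this simple model the cold week is unremarkable, while the 18-for-25 week is genuinely rare.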

There are two statistical models that might apply here.

The first is regression to the mean. How this works is that you have a basic skill level at a certain task; this could be cooking a steak, or getting a base hit three times out of every ten. Sometimes you will do much better than your "mean" level. This might mean cooking a truly perfect steak, or it might mean going three for four against (hopefully) the Yankees. The opposite could happen as well, and you could have an unusually bad performance -- burning the steaks or going zero for four against (hopefully) the Red Sox. These unusually good or bad performances relative to a mean skill level (or score or whatever) have a certain statistical probability, which can be characterised by the normal distribution. They have a low probability of happening, but not zero probability. The laws of probability say that your next performance after a statistically improbable performance is likely to be a more statistically probable one, because the most likely performance is always the most statistically probable one. What this means is that you are likely to follow a particularly good performance with one that is not quite as good, and you are likely to follow a particularly poor performance with one that is better; this is because you are likely to give a performance that is closer to your statistical mean performance.
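
A quick way to see that with simulated numbers (a minimal Python sketch; using a standard normal for the "performance" distribution is an arbitrary choice): take pairs of independent performances, keep the pairs whose first performance was unusually good, and look at the second.

import random

random.seed(6)

# Pairs of independent "performances" drawn from the same bell curve
# (a standard normal; an arbitrary stand-in for a fixed skill level).
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

# Keep only the pairs whose first performance was unusually good (top 2% or so).
great_first = [(a, b) for a, b in pairs if a > 2.0]

avg_first  = sum(a for a, _ in great_first) / len(great_first)
avg_second = sum(b for _, b in great_first) / len(great_first)

print(avg_first, avg_second)   # roughly 2.4, then roughly 0.0

In this toy model the second performance regresses all the way back to the mean, because the two draws share no persistent skill difference; with real, differing skill levels it regresses only part of the way.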

The second contains most of the same principles. The normal distribution says that if you have a certain skill level (let's say Roger Maris' 27 home runs per year average), there is a certain statistical probability, albeit very small, that you will have a season or a streak that exceeds that average performance by quite a lot (Maris' famous 61 HR season). You can actually do the statistical analysis to see how many seasons, by how many players at various average levels, would have to be played in order to produce one statistically improbable 60+ home run season, which explains why it took so long to break Ruth's record. Of course, for the following seasons we normally see . . . regression to the mean. In culinary terms, this explains how someone whose steak-grilling skills are good enough to cook 4 out of 5 perfectly will sometimes cook 20 perfect steaks in a row, and sometimes 20 bad ones in a row.
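
Putting rough numbers on the steak example (a short Python sketch; the 4-out-of-5 success rate comes from the paragraph above, and treating each steak as independent is a simplification):

import random

random.seed(7)
p = 0.8     # cooks 4 out of 5 steaks perfectly, as in the paragraph above

print("any given 20 steaks all perfect:", p ** 20)          # about 0.012, roughly 1 in 87
print("any given 20 steaks all bad:    ", (1 - p) ** 20)    # about 1e-14

# How often does a run of 20 perfect steaks appear somewhere in 1,000 steaks?
def has_hot_streak(n=1000, run=20):
    streak = 0
    for _ in range(n):
        streak = streak + 1 if random.random() < p else 0
        if streak >= run:
            return True
    return False

trials = 2000
print(sum(has_hot_streak() for _ in range(trials)) / trials)   # close to 0.9

Under these assumptions, a run of 20 perfect steaks somewhere in 1,000 steaks is routine.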

Apparently someone paid more attention in Stats than I.

Google :wink:


On "statistical models" and 'slumps' in cooking and baseball.  Or is Steven 'losing it'?  Are the women suffering from PMS?  Is something wrong with the cooking, or is everything okay?

Let's start with 'regression toward the mean':

Suppose we find 100 excellent chefs, with skills far above the average for chefs, and then look at the skills of, say, their apprentices or children.

Will the skills of the apprentices or children also be as good as those of the 100 top chefs?  Typically no.  The skills of the apprentices or children will be closer to the average for all chefs than are the skills of the 100 top chefs.

This phenomenon is called 'regression toward the mean' (where 'mean' is synonymous with 'average').

Here is one case where the cause of regression toward the mean is obvious:

Send 1000 people to Las Vegas each with $1000 (to be clear, I'm NOT offering to fund this experiment!) to play slot machines for a day or until they lose their $1000 whichever comes first.  Take the 100 people who did the best.

Give each of these 100 people $1000 again and repeat -- have them play the slot machines for a day or until they lose the $1000, whichever comes first.

Now separately for each of the 1000 people, the 100 best of the 1000, and the second effort of the 100, find the average winnings.  Call these averages, respectively, X, Y, and Z. Then typically Z < Y and about half the time even Z < X. So, Z moves from Y down toward X and about half the time is actually less than X. This is and example of 'regression toward the mean', that is, Z, from the second effort of the winners from the first effort, moves toward the mean X of the first efforts of the 1000.

The cause:  On the first effort, The 100 best of the 1000 were just lucky, and their luck didn't hold on the second effort.

Or more generally, given an effort with some exceptionally good results, likely some of the cause of those results was just luck so that on the next effort usually the performance will 'regress toward the mean' and be less good.

So, I conclude that regression toward the mean has nothing to do with the cooking of Steven and the women, 'slumps' in baseball, etc., set this topic aside, and move on to something that can get us some progress.

For hot and cold streaks in baseball and Steven's cold streak in the kitchen, here is a probabilistic (not really 'statistical') explanation:

Suppose things are arriving one at a time and we notice when they arrive and count the number of arrivals so far.  We start counting at time 0 with 0 arrivals.  Suppose after time t >= 0, have N(t) arrivals.

Note:  Yes, for each value of t, the count N(t) is what we observe on one 'trial' of an 'experiment' that hypothetically might have been performed many times.

We make two assumptions:

Independent Increments:  For time s >= 0, we assume that the 'increment' in arrivals, N(t + s) - N(t), that is, the number of arrivals in time s starting at time t, is 'independent' of all N(u) for u <= t. The intuitive definition of 'independent' is 'has nothing to do with' or 'knowing N(u) for u <= t' does not 'help' in predicting the increment N(t + s) - N(t).

Note:  A more precise definition would take us into 'currents of sigma-algebras', and let's not go there here.

Stationary Increments:  The probability distribution of the increment N(t + s) - N(t) depends only on s and is the same for all t.

From these 'qualitative' assumptions we can show that there must exist some number r >= 0, that we call the 'arrival rate', so that, for each whole number k = 0, 1, 2, ..., the probability of N(t) = k is just

P( N(t) = k ) = exp(-rt) (rt)^k / k!

where exp(-rt) is the constant e ~ 2.71828183 (thank you, Google!) raised to the power (-rt).  Also k! is a 'factorial', that is, the product

k!  = (1) (2) ...  (k)

To check, we might recall that for x >= 0

exp(x) = 1 + x + x^2 / 2!  + x^3 / 3!  + ...

Then 1 = exp(-rt) exp(rt) so that

1 = P( N(t) = 0 ) + P( N(t) = 1 ) + P( N(t) = 2 ) + ...

as we want.  How 'bout that!

The whole collection of N(t) for t >= 0 is a 'Poisson process' with 'arrival rate' r and

P( N(t) = k ) = exp(-rt) (rt)^k / k!

is the 'Poisson' distribution with parameter rt.

If we let T(k) be the time of arrival k, then it follows that

P( T(1) <= t ) = 1 - exp(-rt)

So here we have the cumulative distribution of the time until the first arrival.  This distribution is the 'exponential' distribution with parameter r.

It turns out, curiously, the distribution of T(1) is also the distribution of the time of the next arrival counting from any time!  That is, the Poisson process 'has no memory'.  That is, if just had an arrival or have been waiting for an hour without an arrival, then the distribution of the time until the next arrival is the same!  Or, even if the arrival rate is one an hour and you have been waiting an hour, can't conclude that the next arrival is 'due real soon, now, y'hear?'.

The corresponding probability density function of T(1) is

r exp(-rt)

It follows that the expectation ('mean', 'average') of T(1) is

E[T(1)] = 1/r

Similarly the expectation of T(k), the time of the k-th arrival, is

E[T(k)] = k/r

and the expectation of N(t) is

E[N(t)] = rt

So, the quantity r does look like an average 'arrival rate':  In time t, the average number of arrivals is just rt.  Or, in time t = 1, the average number of arrivals is just r.

Our assumptions of stationary and independent increments provide a qualitative 'axiomatic' derivation of the Poisson process.  This derivation is nice because often in practice we can have some confidence in the assumptions of stationary and independent increments just intuitively.  Or, "Look, Ma!  No data!".  Or, just do it all with intuitive hand waving!  That is, the assumptions are all qualitative.

There are similar qualitative axiomatic derivations with slightly weaker assumptions due to each of S. Watanabe and A. Renyi.

Exercise:  Check that T(k) <= t exactly when N(t) >= k so that

P( T(k) <= t ) = P( N(t) >= k ).

It turns out that the times between arrivals:

T(1), T(2) - T(1), T(3) - T(2), ...

are independent and have the same distribution

1 - exp(-rt)

So, let's connect 'slumps' in Steven's cooking and baseball:

We let an 'arrival' be a good dish cooked in the kitchen or a base hit in baseball.

To 'test' for a 'slump', let's tentatively entertain the assumptions of stationary and independent increments.

Then the arrival times of good kitchen or baseball results will be just

T(1), T(2), T(3), ...

Then looking at these times, curiously, commonly people guess that the arrivals are in 'bunches' or 'clumps' separated with some long empty periods or 'slumps'.

That is, people can observe such bunches and slumps with just a Poisson process with stationary and independent increments and where the times between arrivals are independent with the same distribution, that is, without any underlying 'cause'.

So, when we see 'bunches' and 'slumps', we might just be looking at a silly, meaningless Poisson process instead of any real cause.

So, before we insist on finding a cause, we need better evidence than just intuitive eyeball 'bunches' and 'slumps'.

For Steven, there is no evidence that anything is wrong!  So, Steven, given the data, we reject the hypothesis that you are 'losing it' and conclude that you are healthy after all!   :smile:

For the women on this thread, the cooking problems do not have to be caused by the conjectured 'women's problems'!  Sorry, men!  Maybe there actually are 'women's problems', but one week of burned stew, omitted ingredients, spilled milk, etc. is not good evidence!

Men, I KNOW what sad news this can be!   :smile:

But, wait, there's more!  Suppose we are running a restaurant serving a city of some hundreds of thousands of people.  For each person, when they go to that restaurant is an arrival process although likely not a Poisson process.  But if the people act independently (not always reasonable) and satisfy a mild additional assumption, then on Saturday night from 7 to 8 PM the arrivals will nicely approximate a Poisson process.  Here we are using the 'renewal theorem' that a sum of independent arrival processes converges to a Poisson process as the number of processes grows.

Similarly for the number of arrivals at the eG Web site.

Yes, to make this work, we have to pick a time interval where the arrival rate remains constant or generalize slightly to 'non-stationary' Poisson processes.

Net, with no data at all, we concluded that Steven and the women are all okay!  How 'bout that!  That has to be the end of the movie!   :smile:

Postscript:  Never mentioned the Gaussian distribution!

I wonder what it says about my geek cred that I found that not only interesting but funny.....

edit to add: Don't worry, it happens to the best of us. I could spin stories for days on the things I have botched up.

Edited by CKatCook

"I eat fat back, because bacon is too lean"

-overheard from a 105 year old man

"The only time to eat diet food is while waiting for the steak to cook" - Julia Child


Most of the time, I'm what might be called a pretty good advanced-amateur cook. Most of what I serve is good, some of it is just okay, and rarely do I turn out a dish I'm embarrassed to serve. And, being my own harshest critic, I'm pretty sure I get an accurate read on the quality of what I produce.

But for the last few days, my cooking skills have somehow left me. Sure, on any given day I can mess something up. But I've messed up every single thing I've cooked since Monday.

Last night I roasted salmon in the oven and made potato wedges at the same time. The salmon was overcooked, the potato wedges were borderline charred on the outside and nearly raw in their centers. The night before I tried to make pizza dough and it just didn't rise. I burnt toast at breakfast time because brioche toasts really fast and I walked away and forgot about it. I messed up setting the timer and cooked spaghetti to mush. I did a poor job washing lettuce and ended up with a weird, sandy salad. (We're still talking about the "sand salad" four days later, and maybe forever.) Etc.

Has this ever happened to you? All of a sudden your cooking skills leave you? I hope they come back, at least.

Reminds me of a holiday several years ago. I had just started dating a woman and told her that I would bring her personal favorite, bread pudding, to her home for a potluck. I went all out, homemade brioche dough, etc. Well, to serve everyone I doubled the recipe and then divided the dough in half to cook. I have no idea how the yeast did it, but every one of them migrated to one loaf, leaving the other abandoned by the "bread magic." I was left with brioche dough stuck all over the oven from one bread and a bacon press from the other. Oh, well.

Without time to make the brioche again and be there on time, I made 2 apple galettes. In my ambition to make a great impression I somehow forgot to chill the dough prior to rolling. Ergo, the dough spread like an eagle's wings. Since I used a cookie sheet, there was apple/sugar/brioche dough ALL over my oven.

I showed up in humility empty handed. Happens sometimes.

Tom Gengo

