
Friday, October 16, 2015

Hunter-gatherers sleep like me

Anthropologists Gandhi Yetish, Hillard Kaplan, and their colleagues just published the results of a study showing that hunter-gatherers get much less sleep than the eight hours we supposedly need. In fact, their sleep patterns closely resemble mine, despite the conventional wisdom that my average 6.5 hours per night is too little. (There’s a nice summary by Anahad O'Connor in a New York Times blog post.)

A long-time fan of Zeo, I have carefully measured my sleep for many years, and I know that my lifetime average is pretty close to 6.5 hours. That’s real sleep, measured by a brain-wave detector strapped to my head. Like other lazy people, I sometimes lie in bed longer than that, but it’s a rare occasion when my sleep duration exceeds 7 hours, even when I’m loaded with potato starch to grow serotonin-boosting Bifidobacteria.

Because the new study appears to contradict conventional scientific wisdom about the importance of 8+ hours of sleep, I read it carefully, along with the collected data, to see if I could spot any problems. So far I think everything adds up:

Plenty of participants: 100 people, male and female, across a range of ages, including some fairly old (60+).

Three separate, unrelated societies: drawn from both Africa and South America, so it’s hard to argue that these people have anything in common other than being hunter-gatherers.

Week-long observations: you might want a study like this to go on for much longer, but I think the duration, from a week to a month per person, was just fine.

Good self-tracking hardware: the anthropologists used the Philips Actiwatch 2, strapped to subjects’ wrists with a tamper-proof hospital band. These are well-studied, medical-grade wearables. Actigraphy isn’t perfect, since it infers sleep from movement during the night, but if anything these devices tend to overestimate sleep. I skimmed the data from the study and it looks good. (A toy sketch of how actigraphy-style scoring works appears below.)
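
For intuition about actigraphy: the device records an activity count for each short epoch (say, one minute), and an epoch is scored as sleep when movement in and around it stays low. Here’s a toy sketch in R; it is emphatically not the Actiwatch’s validated algorithm, just an illustration of the idea:

# Toy actigraphy scoring: one activity count per 1-minute epoch.
# An epoch is scored "sleep" when average movement nearby is low.
# Illustrative only -- not the Philips Actiwatch's actual algorithm.
score_epochs <- function(activity, threshold = 20) {
  n <- length(activity)
  sapply(seq_len(n), function(i) {
    window <- activity[max(1, i - 2):min(n, i + 2)]   # epoch +/- 2 minutes
    if (mean(window) < threshold) "sleep" else "wake"
  })
}

set.seed(1)
activity <- c(rpois(30, 60), rpois(60, 3), rpois(30, 55))   # wake, sleep, wake
table(score_epochs(activity))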

The authors conclude that ambient temperature, not daylight, is the most important signal telling these hunter-gatherers it’s time to sleep. They note that these people sleep on the ground on skin mats, inside huts or out in the open, often covered with lightweight cotton blankets. That isn’t all that different from camping, when, if anything, I tend to sleep more.

Interestingly, when Zeo studied 5,000 of its users back in 2011, it found an average sleep time closer to 8 hours, with my 6.5 hours falling outside the 95% confidence interval, making me (and the hunter-gatherers) real outliers.

Note that although I rarely sleep longer than 6.5 hours, I feel great in the morning and I’m generally alert and feel reasonably fresh all day. Like the hunter-gatherers, I don’t nap and I rarely suffer from insomnia.

I’ll be watching the follow-ups to this research carefully, but for now I’m much more confident that my current level of sleep is just fine.


Monday, July 20, 2015

For potato starch, maybe less is more?

A friend who is enthusiastic about the effect potato starch had on his sleep suggested that my less-than-stellar results might be caused by the amount I had been taking. Instead of 3-4 tablespoons a day, he suggested trying a much smaller amount, maybe only a teaspoon or so.

My results are too preliminary to get excited about yet, but at least in this short trial, the smaller amount seems to help. Interestingly, my overall sleep doesn't change much, but I do notice more dreams, and Zeo confirms that my REM sleep is up quite a bit.

Here's the raw data, dumped straight from the R software I use to track everything (in the summary table below, rows are tablespoons of potato starch per day). The most interesting number is the REM p-value on low-dose days, more than 0 but less than 1 tablespoon:

I tried potato starch on 91 days, and I have Zeo sleep data for a total of 45 of those days.

On 19 days I took exactly one tablespoon. On 6 days I took more than 0 but less than 1 tablespoon.

For total sleep (Z):

  • P-value on days when I had any potato starch: 0.3109656
  • P-value on days when I had exactly 1 TBS: 0.2020084
  • P-value on days when I had more than 0 but less than 1 TBS: 0.3041962
For REM Sleep (REM):

  • P-value on days when I had any potato starch: 0.2505854
  • P-value on days when I had exactly 1 TBS: 0.0603005
  • P-value on days when I had more than 0 but less than 1 TBS: 0.0012399
For Deep Sleep (Deep):

  • P-value on days when I had any potato starch: 0.5148044
  • P-value on days when I had exactly 1 TBS: 0.7402774
  • P-value on days when I had more than 0 but less than 1 TBS: 0.3264305


##                   days   Z.Mean REM.Mean Deep.Mean      Z.SD
## 0                  128 6.364245 1.817969  1.048698 0.6971406
## 0.25                 1 7.000000 2.100000  1.133333        NA
## 0.333333333333333    5 6.526667 2.136667  1.096667 0.5198290
## 1                   16 6.609979 1.975000  1.064583 0.7015906
## 1.5                  1 6.283333 1.300000  1.083333        NA
## 2                    7 6.111905 1.676190  1.019048 0.6943365
## 2.5                  1 5.750000 1.983333  1.250000        NA
## 3                    5 6.873333 2.050000  1.036667 0.6796241
## 4                    8 6.402083 1.752083  1.068750 0.7167186
## 8                    1 6.000000 1.433333  1.250000        NA
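
For the curious, a summary table and p-values like the ones above can be produced with a few lines of R. The data frame and column names here (Starch, Z, REM, Deep) are illustrative stand-ins for my actual tracking spreadsheet:

# Illustrative stand-in for my nightly log: tablespoons of potato starch plus
# Zeo sleep numbers (total, REM, deep, in hours).
set.seed(42)
df <- data.frame(
  Starch = sample(c(0, 0.33, 1, 2, 4), 100, replace = TRUE),
  Z      = rnorm(100, 6.4, 0.7),
  REM    = rnorm(100, 1.8, 0.4),
  Deep   = rnorm(100, 1.05, 0.2)
)

# Mean sleep stats grouped by dose, like the dump above
aggregate(cbind(Z, REM, Deep) ~ Starch, data = df, FUN = mean)

# Welch t-test: REM on low-dose days (more than 0 but less than 1 TBS) vs. none
t.test(df$REM[df$Starch > 0 & df$Starch < 1],
       df$REM[df$Starch == 0])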

Monday, July 06, 2015

Seth Roberts Rules for Self-Experimenters

Digging through the blog of the late Seth Roberts, I find many gems. Here is his concise summary of how to do self-experimentation. He mentions intending to write a book about this, but as far as I know it was never finished:
if you want to figure something out via data collection:
1. Do something. Don’t give up before starting.
2. Keep doing something. Science is more drudgery than scientists usually say.
3. Be minimal.
4. Use scientific tools (e.g., graphs), but don’t listen to scientists who say don’t do X or Y.
5. Post your results.
It’s worth reading the entire four-part blog post.

Wednesday, July 01, 2015

My QS Seattle Talk: Cholesterol and my Microbiome

My presentation at the July meeting of Quantified Self Seattle, with more details about the A/B experiment I wrote about on the uBiome blog.


As always the best part of these presentations is the question and answer period, and the mingling that happens long after the formal talk. I met several people who gave me their own microbiome data right on the spot, and I was able to quickly analyze them with my uBiome tools.

One of the attendees told me about his own potato starch experiments and how they had dramatically improved his sleep, contrary to what I found for myself. The difference, we discovered, is in the amounts: he uses only a teaspoon a day, much less than the 2-4 tablespoons I had been trying. I can’t wait to try a smaller dose to see if I get the same great effect.

Monday, June 29, 2015

My QS15 Slides

Here’s the presentation I made at the Quantified Self Conference in San Francisco last week. It doesn’t include audio, but the full text of the transcript is embedded in the notes.

All presentations at this year's conference were on a strict timer: each slide was displayed for exactly 15 seconds, in PowerPoint's automatic mode, so there was no way to go back if you missed something or rambled too long. That helped focus the talks and ensured everyone was well prepared, but in my case somebody's unattended cell phone started blaring about a minute into my presentation. It was very distracting; normally a speaker would acknowledge the interruption so the noisemaker could be silenced, but the 15-second rule required that I plod on. Hopefully the noisy phone won't be audible when the audio is released in a few weeks, though it unfortunately meant the audience probably missed key parts of the presentation.

Anyway, I was honored to be the final Show and Tell talk, featured in the closing plenary, where I was proud to offer a small tribute to my QS mentor Seth Roberts.

Monday, June 22, 2015

What I learned at Quantified Self 2015

I’m back from two jam-packed days at the QS15 (the Quantified Self Conference) held at Fort Mason, in San Francisco, and I have a few impressions.

There were three “cool” new ideas that I thought played an outsize role at this conference:

1. All things Microbiome

(obviously I would think so). uBiome was there, including an appearance at the first day’s plenary by Jessica Richman. My tweeted (and heavily retweeted) summary of the session was a quote from the first speaker: “We are the last generation without personalized medical data”. That’s especially true for the microbiome, and it was wonderful talking with so many people about their new bacterial experiments. I’ll write more in future posts.

2. Heart Rate Variability

With better technology for measuring heart rates, many people have noticed that beats per minute is a less useful measure than variability. Sometimes it’s more meaningful to look at how much each beat-to-beat interval varies from the others: high variability tends to be associated with creativity and improved mental processing, whereas low variability tends to accompany stress or poor learning conditions.
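
Concretely, the usual HRV numbers are just statistics on the gaps between successive heartbeats (RR intervals). A minimal R sketch, with made-up intervals:

# RR intervals in milliseconds (made-up numbers, just to show the arithmetic)
rr <- c(812, 790, 845, 830, 805, 860, 795, 820, 840, 815)

sdnn  <- sd(rr)                    # overall variability of the intervals
rmssd <- sqrt(mean(diff(rr)^2))    # beat-to-beat variability, common in HRV apps

c(SDNN = sdnn, RMSSD = rmssd)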

Paul LaFontaine used HRV measurements to demonstrate that he is more nervous in situations involving presentations to groups of people than he is in situations reporting to a superior. Mark Leavitt showed it as a way to measure willpower. 

3.  Direct Cranial Stimulation

I thought this was fringe stuff when I first heard of it a few years ago, but enough people have tried it that I’m starting to rethink my skepticism. JD Leadam even has a company, https://thebrainstimulator.net/ selling devices for a little over $100.

Other

I was especially impressed by a breakout session led by Evian Gordon (http://mybrainsolutions.com), who seemed to know a ton about every imaginable aspect of assessing mental performance. Anyone interested in Seth’s Brain Tracker would want to understand what those guys are doing as well. Daniel Gartenberg is another psychology expert in attendance who knows a lot about this subject. I had good results beta-testing an app he wrote that claims to help with deep sleep, so it was nice to talk with him in person again.

What I didn’t see: Apple Watch. Oh sure, there were some discussions of HealthKit and ResearchKit, but unlike past QS conferences, which seemed to be attended by a significant percentage of the world’s Google Glass-wearing population, I saw very few Apple Watches. Whether that’s because availability is still so limited or because the QS early adopters just haven’t taken to the Watch yet, I don’t know.

I’m expecting that http://quantifiedself.com  will dish out many more details in upcoming days and weeks. Worth watching further.

[Photo: Quantified Self Conference 2015]

Tuesday, May 05, 2015

My gut diversity through time

Clark Ellis posts a nice summary of his uBiome results over at the uBiome blog and now, in more detail, at The Self-Taught Author blog. A long period of antibiotic use has made him acutely interested in understanding gut diversity, so he asks others to post their uBiome diversity results too.

Here’s mine:

[Chart: my uBiome diversity results over time]

A few caveats:

  • These values represent only the identified results, which generally range from about 70% (at the genus level) to 95% (phylum). There could well be dozens, perhaps thousands, of other unique bacteria that are simply too rare to be counted by the uBiome technology.
  • A single bacterium can have a big effect, so it probably doesn’t mean much to look at raw counts. Remember that the mammalian genus Canis includes wolves, coyotes, and jackals in addition to your trusty dog Fido. Simply knowing there’s a Canis at the door tells you nothing about whether it’s safe to go out.
  • Species information is (probably) meaningless. uBiome uses 16S rRNA technology that can’t reliably differentiate below the genus level. They don’t even post species information on their web viewer; you have to dig it out of the raw data like I did. They call it “experimental”, which I interpret to mean they apply some statistical guess, perhaps based on general trends. Either way, you shouldn’t rely on it.

Something strange happened in my June sample, which was taken three weeks after the one from May, in what was frankly a boring period of my life (no travel, no unusual food, no camping, etc.). It’s possible that result was simply a mistake.

Note: all of my data is posted on GitHub, and you’re welcome to explore it and compare to your heart’s content as long as you promise to let me know if you find anything interesting!
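
Incidentally, if you want to boil your own raw uBiome counts down to a single diversity number, a standard choice is the Shannon index. This sketch assumes a simple vector of genus-level counts; uBiome’s own diversity figure may be calculated differently:

# Shannon diversity from genus-level read counts (illustrative; uBiome's
# diversity metric may use a different formula or normalization)
shannon <- function(counts) {
  p <- counts / sum(counts)
  p <- p[p > 0]
  -sum(p * log(p))
}

genus_counts <- c(Bacteroides = 52000, Faecalibacterium = 21000,
                  Prevotella = 3000, Bifidobacterium = 800)   # made-up counts
shannon(genus_counts)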

Saturday, April 25, 2015

Potato starch doesn't help my sleep

I've been experimenting with the relationship between sleep and resistant starch, taking a few tablespoons of Bob's Red Mill potato starch, which some people think improves sleep by feeding the helpful Bifidobacterium that may play a role in producing up to 80% of the body's serotonin.

There’s no question the potato starch raised my Bifido levels:
[Chart: my Bifidobacterium levels over time]

But it seems not to have changed my sleep, either in overall hours:
[Chart: total sleep (Z) vs. potato starch]
or in a more precise measure, like REM:
[Chart: REM sleep vs. potato starch]
You can see from these charts that I did have some fantastic nights of sleep after starting potato starch, but there were plenty of other nights when my sleep was back to normal, and sometimes worse. If I hadn’t measured so carefully, I’d be tempted to overplay the good and underplay the not-so-good. If there’s a psychosomatic effect, it may have made me feel better, but in terms of actual sleep time, resistant starch doesn’t seem to have helped.

Tuesday, April 07, 2015

Yup, statins make me smarter

After a fifteen-day test, I’ve concluded that 20mg of simvastatin daily has a major effect on my results on Seth Roberts’s Brain Reaction Time (BRT) test.

[Chart: statin effect on BRT]

Notice the big changes on the days before and after taking the statin (the “treatment”). The two weeks before were “clean” – no fish oil, no other special vitamins, foods, travel, or other changes in daily habits – making the change even more obvious and sudden: just one day makes the difference. (The chart shows BRT measurements roughly 24 hours after treatment).
With n=33, here’s a simple T-Test to show the effect:
## 
##  Welch Two Sample t-test
## 
## t = 7.0834, df = 28.313, p-value = 9.835e-08
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  15.88029 28.79244
## sample estimates:
## mean of x mean of y 
##  71.20000  48.86364
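That output comes from comparing BRT percentile scores on statin days against non-statin days, something like the sketch below. The data here is synthetic, and the Statin column name is my guess at a tidy layout, not my actual spreadsheet:

# Synthetic stand-in for my daily log (ptile = BRT percentile score).
set.seed(7)
rik <- data.frame(ptile  = c(rnorm(18, 49, 9), rnorm(15, 71, 8)),
                  Statin = rep(c(0, 1), c(18, 15)))

t.test(rik$ptile[rik$Statin == 0], rik$ptile[rik$Statin > 0])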
My fellow Seattle Quantified-Selfer Mark Drangsholt, who studied something similar on himself, says 2-3 weeks of treatment helped him reduce or eliminate brain fog and it appears to help me too. This is consistent with other research that shows that statins seem to benefit the brain.
Incidentally, the statin had no significant effect on my sleep (as measured with Zeo):
Sleep (n=33)      Average (hrs)   Standard Deviation   w/Statin (n=15)
total sleep (Z)   6.418           0.625                6.424 (SD=0.58)
REM               1.796           0.416                1.814 (SD=0.44)
Deep              1.039           0.198                1.006 (SD=0.13)
I’ve already demonstrated that two or three Kirkland fish oil pills taken daily give me a statistically significant higher score, while other obvious candidates like sleep or alcohol make no difference. Seth’s app is clearly measuring something. In my next experiments, I’ll try to pin down more precisely what that is as I refine the app to make it easier and faster to use.

Sunday, March 22, 2015

Will statins make me smarter too?

For the next two weeks, I’ll conduct a new test to measure the effect of simvastatin on my Brain Reaction Time.
Two or three Kirkland fish oil pills taken daily make me score higher on Seth Roberts’s Brain Reaction Time test. After six months of self-testing, the effect is pretty robust, as you can see from this simple t-test:
## 
##  Welch Two Sample t-test
## 
## data:  rik$ptile[rik$Fish.Oil == 0] and rik$ptile[rik$Fish.Oil > 0]
## t = -2.7736, df = 90.886, p-value = 0.006728
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -14.489218  -2.396073
## sample estimates:
## mean of x mean of y 
##  45.36170  53.80435
What causes the effect? I have some guesses related to the role that omega-3 fats play in brain nutrition, but are there other ways to get similar effects? I’ve already ruled out many of the obvious other candidates (sleep, alcohol, vitamin D), but several fellow quantified-selfers have suggested I also look at statin drugs, which besides lowering cholesterol also seem to benefit the brain. Mark Drangsholt, who studied this extensively on himself, says 2-3 weeks of treatment helped him reduce or eliminate brain fog.
So I’m going to try the same thing: for the next two weeks, I’ll take 20mg of simvastatin daily while I continue to test my BRT for changes. I’ve taken no fish oil for the past two weeks (and as predicted, my BRT averages have declined), so this should be a “clean test”.
I’m posting my trial and methodology in advance to reduce “reporting bias” that happens when people only post their successes. You can follow along with my (mostly) daily updates here: http://rpubs.com/Sprague/SimvastatinBRT

Friday, March 13, 2015

A mistake in my fourth uBiome sample?

[Update: uBiome recomputed my results, which are now much closer to what I expected.  I'll update with a more detailed post soon.]

After all that analysis and discussion with experts about my uBiome results, I had high expectations for the brand new set of answers that arrived today.

Here’s a comparison chart showing all four of my uBiome submissions:
[Chart: comparison of all four of my uBiome submissions]
In a word: argh!

If the January 19th sample had been my first and only uBiome test, I’d be tempted to read a lot into this. After all, it appears that my levels of proteobacteria are way outside the norm. That’s not all: look at some other oddities about this one:
  • That bifido bloom I saw after sleep-hacking with potato starch: it’s all gone. Not a single bifidobacterium was found in this sample. Hmmm.
  • Lots of Prevotella (almost 3% of the sample), a genus that didn’t appear in any of my previous samples, and a bit worrisome for a meat-eater like me.
  • No more Clostridium, either. Commonly thought of as a pathogen, it may be good to be rid of it, but why did it disappear?
All of these massive changes in the span of only three months? Not impossible – the human gut can change pretty quickly under the right circumstances. But you’d expect something different about my environment, eating habits, and certainly my health.

But here’s the thing: I don’t notice a single difference in my health or well-being over this time period. Same sleep, same weight, same general mood. Diet, bowel movements, skin – like everyone, I see minor day-to-day variations, but absolutely nothing about me is different enough to be noteworthy.

On the other hand, there are a few oddities in the sample itself. First, uBiome warned that their first run found bacteria levels that were too low; the results you see above came after they ran the sample again with more amplification. Second, I used an older kit, one that had been lying around the house for about a year. Finally, I also ran into trouble with the mail, so the sample sat at the post office for several weeks longer than normal. None of that should really matter, but still…

Soooo, my bottom line is that I’m just not going to read much into this sample. I’m waiting on my next submission, one that was sent a few weeks after this one, and hopefully that will give me a much better picture.

The takeaway for you? Don’t read much into a single uBiome test. The science is too new, and there are so many other factors that go into the results. My advice: send in multiple kits, spread over several weeks or months, before jumping to conclusions.

Thursday, February 26, 2015

Looking into my mouth microbiome

The gut biome is interesting enough, but bacteria colonize just about every part of the body, so recently I’ve been studying my uBiome mouth test results. The simple GitHub RuBiome utilities I use for analyzing my gut will work for that too, so here’s a short example of how I did it:


First I loaded my uBiome data into two variables, one for each sample: June 2014 (junMouth) and the other from October 2014 (OctMouth), after a visit to my dentist.
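(For anyone following along, the loading step looks roughly like the sketch below. I'm assuming the raw uBiome JSON download with its ubiome_bacteriacounts table of tax_name/count rows; check your own download's structure, and the file names here are hypothetical.)

library(jsonlite)   # for fromJSON()

# Read one uBiome raw-data download into a data frame of taxa and read counts.
# Assumes the JSON contains a "ubiome_bacteriacounts" table with tax_name,
# tax_rank, and count columns -- verify against your own download.
read_ubiome <- function(path) {
  as.data.frame(fromJSON(path)$ubiome_bacteriacounts)
}

junMouth <- read_ubiome("mouth-2014-06.json")   # hypothetical file names
OctMouth <- read_ubiome("mouth-2014-10.json")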
Let’s see which species are new in the later (October) sample:
octToJunChange <- uBiome_sample_unique(OctMouth, junMouth)
##   count                        missing.tax_name
## 1  3640                  bacterium NLAE-zl-P562
## 2  2725                 Streptococcus sanguinis
## 3  2075               Capnocytophaga gingivalis
## 4  1969 Peptostreptococcus sp. oral clone FG014
## 5  1618                 Granulicatella adiacens
One of those species, Streptococcus sanguinis, looks interesting. Wikipedia says this:
S. sanguinis is a normal inhabitant of the healthy human mouth where it is particularly found in dental plaque, where it modifies the environment to make it less hospitable for other strains of Streptococcus that cause cavities, such as Streptococcus mutans.
No cavities? Nice! More good news: this quick check confirms that I don’t have any S. mutans:
OctMouth[grepl("Streptococcus",OctMouth$tax_name),]$tax_name
## [1] Streptococcus                      Streptococcus pseudopneumoniae    
## [3] Streptococcus sanguinis            Streptococcus constellatus        
## [5] Streptococcus anginosus group      Streptococcus sp. oral clone GM006
## [7] Streptococcus thermophilus         Streptococcus oralis              
## [9] Streptococcus gordonii            
## 250 Levels: [Eubacterium] sulci ... Veillonellaceae
Then I look at the species that disappeared (went extinct) between the two samples:
junToOctChange <- uBiome_sample_unique(junMouth, OctMouth)
##   count                        missing.tax_name
## 1  6034                Capnocytophaga granulosa
## 2  4153 Peptostreptococcus sp. oral clone FL008
## 3  3375         Prevotella sp. oral clone ID019
## 4  2691                   Streptococcus rubneri
## 5  1571                       Prevotella buccae
Anything in the genus Capnocytophaga is an opportunistic pathogen, so I say good riddance. Usually they’re fine, but if your immune system dips they can turn bad.
Streptococcus rubneri was discovered a couple years ago, so little is known about it.
Prevotella buccae is more interesting. It seems to be implicated in periodontal disease (yuck!), but that genus is also involved in breaking down proteins and carbohydrates.
Hard to say what’s really going on. Meanwhile, here are the biggest changes (increase) since the first sample:
junToOctCompare <- uBiome_compare_samples(junMouth, OctMouth)
##                                  tax_name count_change
## 64         Streptococcus pseudopneumoniae        62007
## 68         Veillonella sp. oral taxon 780         8065
## 41                       Neisseria oralis         4693
## 2  Abiotrophia sp. oral clone P4PA_155 P1         2308
## 28                 Granulicatella elegans         1987
Whoa! That first one, Streptococcus pseudopneumoniae, looks nasty! Wikipedia says it may cause pneumonia, though a recent medical journal says, more hopefully, that it “treads the fine line between commensal and pathogen”
...which is a scientific gobbledygook way of saying nobody has a clue. All the more reason to keep testing, submitting, and getting more data. I just sent two more kits to uBiome, and will let you know more as soon as I get the results back.

Tuesday, February 24, 2015

Fish oil, even when it's not a pill

An interesting result when I measured my BRT today:

Note how today’s result was noticeably higher than for the past few days.

 

[Chart: BRT after eating salmon]

 

I've been traveling, and I didn't have any fish oil pills, so it was odd that today's result was so much better than previous days. Was there something unusual in my diet or activity in the past 24-48 hours?

I looked back over my last couple of days of eating, exercise, and so on, and remembered that I’d had salmon for dinner last night.

Bingo.

This is an especially interesting result because it was entirely unexpected. I didn’t know to look for this until after I saw the results of the test.

Thursday, February 19, 2015

Fish oil makes me smarter


We all feel more “alive” on some days compared to others. Some people call it “being in the zone”, or “flow”, where you seem more responsive to the world, able to make better, faster decisions. Wouldn’t it be nice to feel that way more often, maybe even all the time?

Well, as with any attempt to improve something, the first step is to measure the effect, and then try to notice what foods or activities make it better. Unfortunately, it can be hard to tell objectively whether you have more energy than yesterday because after all, you rely on the same brain to tell you whether you feel smart or not. On days when you’re not so energetic, maybe your brain is fooling you.

The late Seth Roberts developed some simple measurement techniques that attempt to tell objectively how smart you are right now so you can compare yourself to the way you felt yesterday, or a few weeks from now, perhaps based on some new type of food you are eating. Seth and I were working on an iPhone version of this test when, tragically, he passed away, but I’ve continued to develop the software ever since and recently came upon some results that I thought were interesting.

How I measure myself

“Brain Reaction Time” (BRT) is a four-minute test that I give myself within an hour after waking up every morning. I don’t think it matters much when or where you do it, though to be as consistent as possible I tie it to my daily coffee-drinking ritual, a regular time and state of mind for me, before the rest of my family gets up. The BRT resembles what psychologists call a “vigilance test”, which airline pilots and others in stressful jobs can take to see if they’re fit for service. But the BRT I was working on with Seth can measure things that are much more subtle, and I’ve been using it to tell which aspects of my life are improving my ability to focus.
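
If you’ve never seen a test like this, the core mechanic is simple: present a prompt after an unpredictable delay and time the response, repeated over a few minutes. Here’s a crude console-only sketch in R that captures the flavor; it is emphatically not Seth’s BRT app:

# A crude reaction-time drill (NOT Seth's BRT test, just the general idea):
# wait a random delay, prompt, and time how long until Enter is pressed.
reaction_trial <- function() {
  Sys.sleep(runif(1, 1, 3))               # random pause so you can't anticipate
  t0 <- Sys.time()
  invisible(readline("Press Enter now! "))
  as.numeric(difftime(Sys.time(), t0, units = "secs")) * 1000   # milliseconds
}

times <- replicate(5, reaction_trial())   # run interactively in an R console
median(times)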

Along with my BRT scores, I track a ton of other variables (exercise, sleep, vitamins, food), which I enter daily in an Excel spreadsheet. I’m working to make this much more automatic using the excellent Zenobase site, but for now the important thing is just to track it however I can.

Results: Fish oil makes me smarter

I occasionally take one or two Kirkland Signature brand fish oil pills in the morning, and of all the different things I track, I was surprised that something so simple could have such an obvious effect on my brain reaction time.

[Photo: fish oil capsules]

Here is the summary chart based on the past three months of testing:

The red and blue lines are the linear model, or trend lines, for each variable. You can think of them as an average: the midpoint of all the points through time.
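
Those trend lines are just ordinary least-squares fits. Here’s a rough sketch of how a chart like that can be drawn in R; the data is synthetic and the color assignment is arbitrary, purely to show the mechanics:

# Synthetic example: BRT percentile by date, colored by fish oil (1 = took it),
# with a least-squares trend line per group.
set.seed(3)
days  <- seq(as.Date("2014-11-15"), by = "day", length.out = 90)
fish  <- rbinom(90, 1, 0.4)
ptile <- 45 + 8 * fish + rnorm(90, 0, 10)

plot(days, ptile, col = ifelse(fish > 0, "red", "blue"), pch = 19,
     xlab = "Date", ylab = "BRT percentile")
abline(lm(ptile ~ as.numeric(days), subset = fish > 0), col = "red")
abline(lm(ptile ~ as.numeric(days), subset = fish == 0), col = "blue")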

If you know something about statistics, we can run a simple T-test:

## 
##  Welch Two Sample t-test
## 
## data:  rik$ptile[rik$Fish.Oil == 0] and rik$ptile[rik$Fish.Oil > 0]
## t = -2.2129, df = 54.212, p-value = 0.03113
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -15.3903411  -0.7599052
## sample estimates:
## mean of x mean of y 
##  44.10345  52.17857

In other words, we can say with 95% confidence that the effect is real. The p-value of about 0.03 means there’s only a 3% chance we’d see a difference this big if fish oil had no effect, which is pretty good evidence that whatever is going on is not due to chance.

What else might matter?

I tried my test on several other variables. For example, here’s how my scores look on days when I’ve had a glass of red wine or beer:

Again, statistically you can see the difference when we do the T-Test:

## 
##  Welch Two Sample t-test
## 
## data:  rik$ptile[rik$Alcohol == 0] and rik$ptile[rik$Alcohol > 0]
## t = 0.3831, df = 69.495, p-value = 0.7028
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -5.827448  8.598423
## sample estimates:
## mean of x mean of y 
##  50.22222  48.83673

The p-value is much higher, so high in fact that there’s no evidence alcohol has any effect.

How about Vitamin D? Here’s the result:

## 
##  Welch Two Sample t-test
## 
## data:  rik$ptile[rik$Vitamin.D == 0] and rik$ptile[rik$Vitamin.D > 0]
## t = -1.134, df = 41.954, p-value = 0.2632
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -11.408090   3.199757
## sample estimates:
## mean of x mean of y 
##  46.33333  50.43750

Perhaps a tiny effect, but that high p-value says it’s probably not real. Since I usually (but not always) take vitamin D on the same mornings I take fish oil, I’m pretty sure this is just an artifact of the data.

Fading Effects

How long does the fish oil effect last? I test about 24 hours after taking two pills, but will the effect remain a day or two later?

If the fish pills help, then I’d expect the improvement to fade a bit each day after I stop taking them. And that’s exactly what I get:

See how my “Fish Oil” scores decline as I get further from the day I took the pills? If the effect were truly random, you wouldn’t expect such a consistent slope in the graph. It really does seem like something is going on here.
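
One way to check for that fade is to compute, for each test day, how many days have passed since the last fish-oil dose, and then average the scores by that lag. A sketch (the aggregate line at the end assumes a daily log data frame with illustrative column names):

# Days since the most recent fish-oil dose, given a daily dose vector
days_since <- function(dose) {
  last <- NA_integer_
  out  <- rep(NA_integer_, length(dose))
  for (i in seq_along(dose)) {
    if (dose[i] > 0) last <- i
    if (!is.na(last)) out[i] <- i - last
  }
  out
}

# Example: doses on days 1 and 5
days_since(c(2, 0, 0, 0, 2, 0, 0))
## [1] 0 1 2 3 0 1 2

# Then, with a daily log (illustrative column names):
# rik$lag <- days_since(rik$Fish.Oil)
# aggregate(ptile ~ lag, data = rik, FUN = mean)   # mean BRT score by lag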

Sleep doesn’t matter

Okay, perhaps I’ve convinced you that fish oil helps improve the scores on this test. But we all know that good sleep is perhaps the single most important factor in how well we feel. Maybe the fish oil just helps ensure a good night’s rest?

Nope. I track my sleep very carefully, using a Zeo headband that can tell precisely when I fell asleep, and whether (and for how long) I was awake in the middle of the night.

Surprisingly, sleep seems to make no significant difference in my test scores. Here’s a graph, with blue dots showing my scores on days when I slept less than my average, and red dots when I slept more. See any patterns? (I can’t either)

## 
##  Welch Two Sample t-test
## 
## data:  rikN$ptile[rikN$Z <= meanZ] and rikN$ptile[rikN$Z > meanZ]
## t = 0.6063, df = 80.505, p-value = 0.546
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -4.799337  9.005546
## sample estimates:
## mean of x mean of y 
##  50.51220  48.40909

Again, the high p-value, plus the similarity between the two means is pretty good evidence that sleep has little to do with my BRT.
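
For completeness, the split behind that graph and t-test is simply “slept less than my average” versus “slept more”. Reproduced as a sketch with synthetic data, keeping the variable names (rikN, Z, ptile) from the output above:

# Synthetic stand-in: Z = hours of sleep, ptile = BRT percentile score.
set.seed(11)
rikN  <- data.frame(Z = rnorm(85, 6.4, 0.6), ptile = rnorm(85, 50, 15))
meanZ <- mean(rikN$Z)

plot(rikN$Z, rikN$ptile, pch = 19,
     col = ifelse(rikN$Z <= meanZ, "blue", "red"),
     xlab = "Sleep (hours)", ylab = "BRT percentile")

t.test(rikN$ptile[rikN$Z <= meanZ], rikN$ptile[rikN$Z > meanZ])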

Conclusions

This is not the first claim that’s been made about the relationship between food and BRT. Seth Roberts noted that he scored higher after eating butter, and Alex Chernavsky showed that BRT is affected by caffeine.

I’d need a double-blind study, perhaps conducted with dozens or hundreds of individuals to “prove” these results scientifically, but that’s not the point of this test. I found something that apparently works consistently for me, and it lets me easily test many other types of variables. By looking at other outliers in my BRT results, I hope to find other foods and activities that can make me smarter too.