Introduction
Several years ago, I realized that advancements in life sciences had reached a point where I could leverage this knowledge to significantly increase my chances of extending my lifespan. This epiphany pushed me to take action and explore this possibility further.
Soon after, I understood that if I were to succeed in this endeavor, careful tracking of my progress and the effectiveness of various interventions would be crucial. This led me to dive into the biology of aging and the quantified-self movement.
It’s now been about two years since I began collecting data seriously, and I’ve refined my analytical tools enough to start extracting valuable insights. As I embarked on this analysis and documented my findings, I realized that with just a bit more effort, I could share these insights with others. Thus, this blog was born. Beyond disseminating my discoveries, my primary goal is to connect with others who are on similar paths, to share experiences, tools, and possibly even to pool data to hasten the generation of insights. I remain open-minded about where this journey will take me.
In this blog, I plan to cover the following:
- The correlations I uncover in my data, particularly those related to the effectiveness of the interventions I’m testing, and insights that may benefit others pursuing similar goals.
- Announcements of self-experiments, along with explanations of the rationale behind them, followed by a discussion of the results.
- Reviews of the hardware, software, and diagnostic tools I use, and a look into the system I’ve developed to automate data collection and analysis. I hope my experiences will be helpful to others, or even inspire further innovation.
- Occasionally, I’ll highlight major developments in aging science and self-tracking technology that catch my attention.
To Science or Not to Science
A disclaimer: my professional background is in a computational discipline focused on the analysis of biological signals, but my work does not directly intersect with aging or self-tracking research. Nonetheless, I aim to apply my skills to the issues discussed above with as much rigor as is feasible under the constraints of N=1 experiments.
That said, it’s crucial to acknowledge that the data I’m working with (essentially an N=1 experiment, fraught with the subjective pitfalls of self-experimentation) do not meet the standards of scientific rigor. Hence, any findings I report in this blog shouldn’t be viewed as definitive evidence but rather as hints that some relationships might exist, and even then they may only apply to me.
Despite these limitations, I find my efforts valuable — at least on a personal level — and I’d like to explain why. The primary challenge of working with such limited, uncontrolled data is the risk of identifying false positives — detecting relationships that don’t actually exist — unless strict significance criteria are applied. However, within the context of my personal journey, I view this differently than I would as a scientist. I accept the possibility of false positives, which means I may occasionally act on interventions that appear effective but aren’t. I don’t see this as a major issue as long as most interventions I identify as beneficial genuinely are. My real concern would be continuing interventions that are harmful, but that seems unlikely since an intervention negatively affecting my health would probably not show up as strongly positive in my analyses. In short, I’m comfortable with identifying a mix of truly and falsely beneficial interventions, provided that none of the false positives are actually harmful. In upcoming posts, I’ll explain my approach in more detail and how I aim to achieve this balance.
For those interested in a more detailed statistical explanation (feel free to skip ahead if not 😊): I’m currently tracking over 300 variables about myself (more on that in the next post). This leads to over 45,000 possible pairwise relationships to explore. However, with a maximum of around 1,000 readings for even the variables I track daily, applying a strict Bonferroni correction would make it almost impossible to detect any meaningful relationships, regardless of their strength or validity. Therefore, I won’t adhere to such stringent criteria for significance. Instead, I’ll focus on relatively strong correlations (for instance, p<0.001), ignore those with p-values barely below 0.05, and conduct additional ad-hoc tests to increase my confidence in the validity of these relationships. I believe this approach will ensure that most of the relationships I identify as beneficial will truly be positive, while the chance of identifying harmful relationships as positive will be extremely low. Additionally, I’ll periodically reassess the interventions as more data accumulates to catch any potential false positives.
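To make the multiple-comparisons arithmetic concrete, here is a minimal sketch in Python. It assumes exactly 300 variables (the post only says “over 300”, so the counts are illustrative lower bounds) and shows why a Bonferroni-corrected threshold is so punishing compared with the looser p<0.001 cutoff described above:

```python
from math import comb

# Illustrative: 300 tracked variables (the post says "over 300").
n_vars = 300
n_tests = comb(n_vars, 2)            # all pairwise relationships: 44,850
alpha = 0.05
bonferroni_cutoff = alpha / n_tests  # per-test threshold under Bonferroni

# The cutoff actually used in this blog's analyses, per the text above:
chosen_cutoff = 0.001

print(f"pairwise tests:        {n_tests}")
print(f"Bonferroni threshold:  {bonferroni_cutoff:.2e}")  # roughly 1.1e-06
print(f"chosen threshold:      {chosen_cutoff}")
```

With ~1,000 observations per variable, only extremely strong correlations could clear a p≈1.1e-06 bar, which is why the post trades some family-wise error control for the ability to detect moderate effects at all.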