NOAH
JACOBS

TABLE OF CONTENTS
2025.02.09-On-Overengineering
2025.02.02-On-Autocomplete
2025.01.26-On-The-Automated-Turkey-Problem
2025.01.19-On-Success-Metrics
2025.01.12-On-Being-the-Best
2025.01.05-On-2024
2024.12.29-On-Dragons-and-Lizards
2024.12.22-On-Being-a-Contrarian
2024.12.15-On-Sticky-Rules
2024.12.08-On-Scarcity-&-Abundance
2024.12.01-On-BirdDog
2024.11.24-On-Focus
2024.11.17-On-The-Curse-of-Dimensionality
2024.11.10-On-Skill-as-Efficiency
2024.11.03-On-Efficiency
2024.10.27-On-Binary-Goals
2024.10.20-On-Commitment
2024.10.13-On-Rules-Vs-Intuition
2024.10.06-On-Binding-Constraints
2024.09.29-On-Restrictive-Rules
2024.09.22-On-Conflicting-Ideas
2024.09.15-On-Vectors
2024.09.08-On-Perfection
2024.09.01-On-Signal-Density
2024.08.25-On-Yapping
2024.08.18-On-Wax-and-Feather-Assumptions
2024.08.11-On-Going-All-In
2024.08.04-On-Abstraction
2024.07.28-On-Naming-a-Company
2024.07.21-On-Coding-in-Tongues
2024.07.14-On-Sufficient-Precision
2024.07.07-On-Rewriting
2024.06.30-On-Hacker-Houses
2024.06.23-On-Knowledge-Graphs
2024.06.16-On-Authority-and-Responsibility
2024.06.09-On-Personal-Websites
2024.06.02-On-Reducing-Complexity
2024.05.26-On-Design-as-Information
2024.05.19-On-UI-UX
2024.05.12-On-Exponential-Learning
2024.05.05-On-School
2024.04.28-On-Product-Development
2024.04.21-On-Communication
2024.04.14-On-Money-Tree-Farming
2024.04.07-On-Capital-Allocation
2024.03.31-On-Optimization
2024.03.24-On-Habit-Trackers
2024.03.17-On-Push-Notifications
2024.03.10-On-Being-Yourself
2024.03.03-On-Biking
2024.02.25-On-Descoping-Uncertainty
2024.02.18-On-Surfing
2024.02.11-On-Risk-Takers
2024.02.04-On-San-Francisco
2024.01.28-On-Big-Numbers
2024.01.21-On-Envy
2024.01.14-On-Value-vs-Price
2024.01.07-On-Running
2023.12.31-On-Thriving-&-Proactivity
2023.12.24-On-Surviving-&-Reactivity
2023.12.17-On-Sacrifices
2023.12.10-On-Suffering
2023.12.03-On-Constraints
2023.11.26-On-Fear-Hope-&-Patience
2023.11.19-On-Being-Light
2023.11.12-On-Hard-work-vs-Entitlement
2023.11.05-On-Cognitive-Dissonance
2023.10.29-On-Poetry
2023.10.22-On-Gut-Instinct
2023.10.15-On-Optionality
2023.10.08-On-Walking
2023.10.01-On-Exceeding-Expectations
2023.09.24-On-Iterative-Hypothesis-Testing
2023.09.17-On-Knowledge-&-Understanding
2023.09.10-On-Selfishness
2023.09.03-On-Friendship
2023.08.27-On-Craftsmanship
2023.08.20-On-Discipline-&-Deep-Work
2023.08.13-On-Community-Building
2023.08.05-On-Decentralized-Bottom-Up-Leadership
2023.07.29-On-Frame-Breaks
2023.07.22-On-Shared-Struggle
2023.07.16-On-Self-Similarity
2023.07.05-On-Experts
2023.07.02-The-Beginning

WRITING

"if you have to wait for it to roar out of you, then wait patiently."

- Charles Bukowski

Writing is one of my oldest skills; I started when I was very young, and have not stopped since. 

Ages 13-16 - My first recorded journal entry was at 13 | Continued journaling, on and off.

Ages 17-18 - Started writing a bit more poetry, influenced heavily by Charles Bukowski | Shockingly, some of my rather lewd poetry was featured at a county-wide youth arts event | Self-published my first poetry book.

Age 19 - Self-published another poetry book | Self-published a short story collection with a narrative woven through it | Wrote a novel in one month; after considerable edits, it was longlisted for the DCI Novel Prize, although that's not that big of a deal; I think that contest was discontinued.

Age 20 - Published the GameStop book I mention on the investing page | Self-published an original poetry collection that was dynamically generated based on reader preferences | Also created a collection of public domain poems with some of my friends' and my own mixed in; I was going to publish it with the dynamic generation as well, but never did.

Age 21 - Started writing letters to our hedge fund investors, see investing.

Age 22 - Started a weekly personal blog | Letters to company investors, unpublished.

Age 23 - Coming up on the one-year anniversary of consecutive weekly blog publications | Letters to investors, unpublished.

You can use the table of contents to the left or click here to check out my blog posts.

Last Updated 2024.06.10

Join my weekly blog to learn about learning

On Reducing Complexity

One simple rule that applies to sales, coding, and evil rabbits.

XLIX

2024.06.02

What if I could give you one decision making principle that, if you used, would give you whatever you wanted in life? That’d be some pretty powerful information, wouldn’t it?

Well, I can’t do that, but we can certainly talk about what kind of information is useful.


-------------------

Strategies for Profit

Information is a hard thing to define. It’s physical, yet it feels so abstract. It’s everywhere, but we only pay attention to a few subsets of it. Still, a simplified, working definition that captures all of the nuance I want to communicate in this post is:

Information is a strategy for interacting with the world that, on average, gives the user an intended consequence.

Easy enough. Information is a blueprint. We read the blueprint and follow it to build the outcome that we want.

If you want to go to jail, you'd start breaking laws. You wouldn't get caught every time, but you usually would. If you want to get a job in investment banking, you'd go to a target school, join the prestigious clubs, and get the internships at the good banks. Usually, that would work.

Simple enough. But, perhaps more useful than these two specific examples is information that generalizes across different domains. We'll just call them heuristics here: some map or blueprint for action that can be used to get "better results" across multiple domains.

So, here is a very similar heuristic, stated a number of different ways:

  1. Occam’s Razor - Entities should not be multiplied beyond necessity.

  2. K.I.S.S. - Keep it Simple, Stupid

  3. Reduce Complexity - No elaboration needed

Here, the intended consequence is not explicitly spelled out. That's perhaps where it can get tricky… what if the intended consequence you're going for is not in alignment with the heuristic's intended consequence? And what even are "better" results?

Well, we’ll just explore one implementation of each of these heuristics below to work through that.

Occam’s Killer Rabbits

I generally associate Occam's Razor with evaluating the probability that something is true. How do you assess the probability of two competing explanations? As an example, if I'm driving and see roadkill, I might come up with two competing possibilities:

  1. The poor animal was attempting to cross the road and got hit by a vehicle.

  2. The animal was a rabid fiend that had slain three local villagers before a noble warrior finally detained it. Subsequently, the Chief held council, and it was determined that the foul creature's fate was to be beaten to death, smashed, and crushed before finally being deposited near the road as a warning to other wandering rabbits that might perform similar transgressions.

While number two is quite the compelling narrative, operating with Occam's Razor will result in the selection of number one. The "entities," here taken loosely, needed for explanation one to be true are significantly less expansive than the entities needed for explanation two to be true.

And, really, this is intuitive, I hope; all of the claims in number two were added with no evidence whatsoever. It increases complexity to no end and with no support.

In this case, the intended consequence of the principle might be having a world view more aligned with reality.

KISS’ing One Proposal

I made a mistake while selling this week. A potential client, we'll call him Todd, had just finished the pro bono fine-tuning phase that Ultima does with clients. It was time for me to send him a proposal for a paid pilot.

One thing he asked in our call, though, was about prospect lists. Sometimes, a client will give us a list of accounts they want us to focus on; other times, we’ll have to make the list of accounts ourselves. When we do the latter, we charge a little bit more for it.



Caption: Brilliant, albeit unrelated implementation of “Reducing Complexity”: marry gravity with one water source to nourish plenty of plants. Found at an MMA gym, in between Harvard and MIT.

With Todd in particular, his product can only be sold to a very specific set of sub customers. He already has quite a comprehensive list of these customers. We found some samples of similar customers for him on our own during the evaluation, but we also found a lot of dissimilar customers as well.

So, when Todd and I were discussing next steps, he asked whether he was supposed to provide the prospect list going forward or whether we would. Really, it was obvious that he wanted to give us his account list and just use that, but I took too long elaborating on the option of us finding a list for him. Then, after the call, I sent him TWO proposals: one with the prospect list creation service in it, and one without.

Amateur hour: I already knew what he wanted, and I presented him with the option to take a confusing upcharge on something I knew he didn't want. Of course, we could set it up so that customers get a discount by providing a prospect list, which might even turn the extra complexity into a "win" for the customer, but that's not how I framed it.

I still feel good about the probability of closing the deal, but if I had remembered KISS, I'd feel great about it; I would've confirmed that he was comfortable sharing an account list on our call and then sent him one simple proposal. Instead, I created more mental work for a potential customer and for everyone in his org he now has to sell it to.

So, in short, the intended consequence of leveraging KISS here is maximizing the probability of closing a deal.

Reducing API Calls

Recently, I saw a solution for a programming task in which, given an article, the code is supposed to extract information about the people in the article if they are related to a certain organization.

One proposed solution involved seven prompts and API calls through a complex wrapper to either a Claude 3 or GPT model. The solution promised to find a bunch of extra info about these people. But, quite frequently, it failed to even pull the right quotes for many of the people in the test cases.

So, going back to the drawing board, using the heuristic "Reduce Complexity," and leveraging the help of someone 10x smarter than me who uses that heuristic even better than I do, I saw how using one prompt rather than seven, adding no unnecessary context, and using just a few powerful Python libraries produced roughly the same results as the complex solution. It took maybe 20x fewer lines of code and was designed in an hour rather than weeks.
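The one-prompt structure can be sketched roughly like below. This is a minimal illustration, not the actual solution: the prompt wording, the function names, and the stubbed call_llm placeholder (which stands in for a real Claude or GPT API call) are all my assumptions.

```python
import json

def build_prompt(article: str, organization: str) -> str:
    # One prompt that asks for everything at once, instead of seven
    # chained prompts each pulling out one piece of information.
    return (
        f"From the article below, list every person affiliated with "
        f"{organization}. Return a JSON list where each item has the keys "
        f"'name' and 'quote' (an exact quote from the article, or null).\n\n"
        f"Article:\n{article}"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g. an OpenAI or Anthropic
    # client). Stubbed with a canned response so the pipeline runs as-is.
    return json.dumps([{"name": "Jane Doe", "quote": "We shipped it."}])

def extract_people(article: str, organization: str) -> list[dict]:
    # Single API call, no wrapper framework, no chained context:
    # send one prompt, parse one JSON response.
    raw = call_llm(build_prompt(article, organization))
    return json.loads(raw)

people = extract_people("...article text...", "Acme Corp")
```

Because the whole pipeline is one prompt and one parse step, a failure is either a bad prompt or a bad parse, which is exactly what makes the rapid iteration described below possible.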

And, while it’s not perfect, the complexity is so much lower that rapid iteration on the solution is now possible; there is so little code that it’s pretty easy to find out where it actually goes wrong. Adding in a lil logic and maybe another LLM prompt should have it routinely outperforming the original solution.

And, the best part is, even if you swap the GPT calls we used in the new experiment with a Phi-3 medium model* and run it on my home computer, it's surprisingly close to being as useful as the original seven-step method. I haven't tested it yet, but I speculate that a 20B-parameter model may be sufficient and a 70B-parameter model might be just enough overkill for the task to dependably get the right answer.

Many lines of code rooted in complex theory are useless; just as heuristics are useful insofar as they enable optimal action with as little overhead as possible, code is useful insofar as it gives the intended results. And, in this case, our heuristic, Reduce Complexity, had the intended result of giving us code that does what it's supposed to do… hm, it seems code and heuristics are awfully similar… more on that another time…

*Speaking of reducing complexity, the Phi-3 family of models is based on the notion that training a model on the language of a 4-year-old is sufficient for surprisingly many tasks!

Meta Heuristics

At the start of this, I promised you one simple rule that applied to sales, coding, and evil rabbits. Yet, I gave you three heuristics. Well, Occam’s Razor, KISS, and Reducing Complexity can really all be used interchangeably in any of the above examples. Pick your poison; adherence to all of them generally results in the same outcome.

And again, what is that outcome? In all three of these examples, I'd say it's pretty objectively "better results," whether that be a more accurate representation of the world, a higher probability of closing a sale, or code that actually works. The finesse in leveraging heuristics may be related to how well you can actually define what a "better outcome" is in a particular case, but that's a discussion for another time.

For now, just have your mind be blown with me: you could really use one sentence (any one of our three heuristics) to drive decisions in three different domains: understanding the source of roadkill, closing deals, and playing with AI.

These heuristic things are powerful.

One other question may be on your mind: what if you and I implement the same heuristics differently in the same situations? Well, my friend, both of us will be judged by the world and openly graded on the results we get. If you are repeatedly using a heuristic and getting poor results, but see other people getting good results with the same one, I would suggest a Wiccan Curse, user error, or some counterbalancing pressure coming from the intentional or unintentional implementation of another heuristic.


-------------------

Information is a real thing. I don't know how to gauge its power, but it certainly has something to do with how often it can get you the outcome that you want.

A parting quote from Naval:

“The only true test of intelligence is if you get what you wanted out of life.”

  -Naval Ravikant

The same can be said about the test of the value of information.

Live Deeply,