NOAH
JACOBS

TABLE OF CONTENTS
2025.02.09-On-Overengineering
2025.02.02-On-Autocomplete
2025.01.26-On-The-Automated-Turkey-Problem
2025.01.19-On-Success-Metrics
2025.01.12-On-Being-the-Best
2025.01.05-On-2024
2024.12.29-On-Dragons-and-Lizards
2024.12.22-On-Being-a-Contrarian
2024.12.15-On-Sticky-Rules
2024.12.08-On-Scarcity-&-Abundance
2024.12.01-On-BirdDog
2024.11.24-On-Focus
2024.11.17-On-The-Curse-of-Dimensionality
2024.11.10-On-Skill-as-Efficiency
2024.11.03-On-Efficiency
2024.10.27-On-Binary-Goals
2024.10.20-On-Commitment
2024.10.13-On-Rules-Vs-Intuition
2024.10.06-On-Binding-Constraints
2024.09.29-On-Restrictive-Rules
2024.09.22-On-Conflicting-Ideas
2024.09.15-On-Vectors
2024.09.08-On-Perfection
2024.09.01-On-Signal-Density
2024.08.25-On-Yapping
2024.08.18-On-Wax-and-Feather-Assumptions
2024.08.11-On-Going-All-In
2024.08.04-On-Abstraction
2024.07.28-On-Naming-a-Company
2024.07.21-On-Coding-in-Tongues
2024.07.14-On-Sufficient-Precision
2024.07.07-On-Rewriting
2024.06.30-On-Hacker-Houses
2024.06.23-On-Knowledge-Graphs
2024.06.16-On-Authority-and-Responsibility
2024.06.09-On-Personal-Websites
2024.06.02-On-Reducing-Complexity
2024.05.26-On-Design-as-Information
2024.05.19-On-UI-UX
2024.05.12-On-Exponential-Learning
2024.05.05-On-School
2024.04.28-On-Product-Development
2024.04.21-On-Communication
2024.04.14-On-Money-Tree-Farming
2024.04.07-On-Capital-Allocation
2024.03.31-On-Optimization
2024.03.24-On-Habit-Trackers
2024.03.17-On-Push-Notifications
2024.03.10-On-Being-Yourself
2024.03.03-On-Biking
2024.02.25-On-Descoping-Uncertainty
2024.02.18-On-Surfing
2024.02.11-On-Risk-Takers
2024.02.04-On-San-Francisco
2024.01.28-On-Big-Numbers
2024.01.21-On-Envy
2024.01.14-On-Value-vs-Price
2024.01.07-On-Running
2023.12.31-On-Thriving-&-Proactivity
2023.12.24-On-Surviving-&-Reactivity
2023.12.17-On-Sacrifices
2023.12.10-On-Suffering
2023.12.03-On-Constraints
2023.11.26-On-Fear-Hope-&-Patience
2023.11.19-On-Being-Light
2023.11.12-On-Hard-work-vs-Entitlement
2023.11.05-On-Cognitive-Dissonance
2023.10.29-On-Poetry
2023.10.22-On-Gut-Instinct
2023.10.15-On-Optionality
2023.10.08-On-Walking
2023.10.01-On-Exceeding-Expectations
2023.09.24-On-Iterative-Hypothesis-Testing
2023.09.17-On-Knowledge-&-Understanding
2023.09.10-On-Selfishness
2023.09.03-On-Friendship
2023.08.27-On-Craftsmanship
2023.08.20-On-Discipline-&-Deep-Work
2023.08.13-On-Community-Building
2023.08.05-On-Decentralized-Bottom-Up-Leadership
2023.07.29-On-Frame-Breaks
2023.07.22-On-Shared-Struggle
2023.07.16-On-Self-Similarity
2023.07.05-On-Experts
2023.07.02-The-Beginning

WRITING

"if you have to wait for it to roar out of you, then wait patiently."

- Charles Bukowski

Writing is one of my oldest skills; I started when I was very young, and have not stopped since. 

Ages 13-16 - My first recorded journal entry was at 13 | Continued journaling on and off.

Ages 17-18 - Started writing a bit more poetry, influenced heavily by Charles Bukowski | Shockingly, some of my rather lewd poetry was featured at a county-wide youth arts event | Self-published my first poetry book.

Age 19 - Self-published another poetry book | Self-published a short story collection with a narrative woven through it | Wrote a novel in one month; after considerable edits, it was longlisted for the DCI Novel Prize, although that’s not that big of a deal; I think that contest was discontinued.

Age 20 - Published the GameStop book I mention on the investing page | Self-published an original poetry collection that was dynamically generated based on reader preferences | Also created a collection of public-domain poems with some of my friends’ and my own mixed in; I was going to publish that one with the same dynamic generation too, but never did.

Age 21 - Started writing letters to our hedge fund investors (see the investing page).

Age 22 - Started a weekly personal blog | Letters to company investors, unpublished.

Age 23 - Coming up on the one-year anniversary of consecutive weekly blog publications | Letters to investors, unpublished.

You can use the table of contents to the left or click here to check out my blog posts.

Last Updated 2024.06.10

Join my weekly blog to learn about learning

2025.02.02

LXXXV

Surprise, surprise: AI can not only make you smarter, it can also make you dumber.


-------------------

No More Autocomplete

I turned off the autocomplete function in my code editor this weekend. It was making me a worse programmer. 

An expert is good at looking at a problem in a domain, quickly coming up with a number of solutions, and selecting the most acceptable one based on the situation. 

When you use an AI tool at the start of a problem solving process, you are effectively starting with one or two solutions that the AI came up with for you. 

Your role is now that of critic rather than creator. Once you have an option in front of you, it is easier to analyze and edit that option than it is to create and evaluate other options. If that option is “good enough,” on average, you will likely go with it.

In this way, you will keep creating things that are “okay.” This will make you worse over time.

I am still using AI to help me, but I am being much more intentional about it. It is phenomenal leverage, but it is dangerous if you do not use it carefully.

Experts

I’ve written about experts a number of times before. A working definition: 

An expert can efficiently and accurately identify and manage the risks of decisions in a given domain.

The relevant consequences of this are twofold:

  1. Given a blank canvas, the expert can come up with a number of decisions and select the most appropriate.

  2. Given an existing solution, the expert can analyze the pros and the cons.

The second case is incredibly valuable if you are getting advice from an expert–you can see how they think in relation to the solution you came up with and benefit from their understanding of cause and effect. 

However, if you are functioning as the expert and creating something, you will get better results by operating in the first case more often. That being said, it is cognitively more expensive to do so, and, if you are handed an “okay” solution, even if you are functioning as an expert, there is a very real temptation to just go with it.

By the way, I am not an expert at coding. I am intermediate at best, novice at worst. That being said, this definition of expert above is aspirational and practical. Meaning, an expert ought to be efficient at thinking as outlined. But, if you are not yet efficient at thinking like that, you become efficient at it through repetition and trial and error.

You become an expert by thinking like one.

AI Autocomplete

With tools like Cursor and GitHub Copilot, it is very common for programmers to have a visible, auto-generated solution at all times (“the easy way out”).



[Screenshot: The colored function is one we use in prod; the grayed-out text is Cursor’s autocomplete suggestion. Right now, we have no use at all for a “safe_score_batch.” The next screenshot shows what happens when I accept the suggestion.]

[Screenshot: The suggested completion for the function we don’t need is “okay” to “bad”: it gets the benefit of sharing a resource (one that could potentially be problematic to share) but does nothing to complete the function in an actual “batch.”]
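
To make the shape of that suggestion concrete, here is a toy sketch; the class and scoring logic below are illustrative stand-ins, not the actual code from the screenshots:

    # Illustrative sketch only, not the real production code.

    class Client:
        """Stand-in for a shared resource that may not be safe to share."""
        def score(self, item: str) -> float:
            return float(len(item))  # pretend scoring logic

    shared_client = Client()

    def safe_score(item: str, client: Client) -> float:
        # simplified stand-in for the function we actually use in prod
        return client.score(item)

    def safe_score_batch(items: list[str]) -> list[float]:
        # the kind of completion autocomplete suggests: it reaches for the
        # shared client (potentially problematic to share) and loops item
        # by item, so nothing is actually processed as a batch
        return [safe_score(item, shared_client) for item in items]

    print(safe_score_batch(["a", "bb"]))  # [1.0, 2.0]

The suggestion looks plausible and runs, but it is not a function we need, and it is not really a batch.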

This is like the text autocomplete on an iPhone keyboard or when you’re writing an email in Gmail, except more extreme, because oftentimes the coding autocomplete will provide whole functions for you.

We can break the quality of the autocomplete suggestions into three categories: 

  • Case I: All of the autocomplete suggestion is as good or better than what you would write

  • Case II: Some of the autocomplete suggestion is as good or better than what you would write

  • Case III: None of the autocomplete suggestion is as good or better than what you would write

Cases I & III are super easy to deal with. In Case I, you accept it, and in Case III, you ignore it. Of course, things aren’t quite so simple, because most of the time, the suggestion is Case II.

So, in theory, there is some threshold for what percentage of the code needs to be as good or better than what you would write in order for you to accept the suggestion. Maybe if 51% of the code is as good as what you would write, you take it and then fix the other 49%, whereas if only 49% is as good or better, you ignore it completely.
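
As a toy model of that threshold (the numbers are made up; the point is the decision rule):

    def accept_suggestion(fraction_good: float, threshold: float = 0.50) -> bool:
        # accept the autocomplete suggestion only if enough of it is at
        # least as good as what you would have written yourself
        return fraction_good > threshold

    accept_suggestion(0.51)  # True: take it, then fix the other 49%
    accept_suggestion(0.49)  # False: ignore it and write your own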

In reality, such an autocomplete system causes two things to happen: 

  1. You are spending more time analyzing the code than you are creating it.

  2. Over time, you are tempted to think a higher percentage of the code is as good or better than yours.

I know I’m talking about code here, but really, these same risks apply to anything you default to letting ChatGPT or another AI or AI agent do for you.

Analysis Mode

If you are constantly given solutions to analyze, you will naturally spend time analyzing them. This takes effort and time away from your ability to create solutions. 

This may seem like a natural step towards a leadership or executive position–looking at GPT’s output becomes like analyzing an employee’s work. 

The problem with an autocomplete system like Cursor Tab is that you oftentimes see a possible solution before you can even start coming up with options yourself. And, if even a quarter of that solution is good enough, you’re now fixing the other three-quarters in your head rather than asking if it’s even a solution that should be used.

In other words, the AI is in the driver’s seat and you are editing for it.

While this may seem like an issue confined to autocomplete tools, it is not. The same issue exists when you are using a chat tool with lazy prompting. If you just throw in some context and a problem, you are not doing the creative part of the problem solving and will be analyzing whatever solution it gives you.

It is very easy to fall into this trap, which is why it is so dangerous.

Lowering Quality

Humans tend to choose the path of least resistance. So, going back to our three autocomplete cases, there is pressure to want the suggestions to be closer to Case I, because that is easier.

So, if you are coding for 6 hours a day and always see these auto-suggestions, you might be inclined to start biasing towards the assumption that a greater percentage of the code is as good or better than what you’d write. After all, if that is the case, then there’s no point in editing it! You can just go ahead and use it as is.

In this way, it becomes convenient if the AI is better and you are worse. So, maybe we start to believe that this is the case.

And, because we believe it, maybe it starts to become true.


Working Solution

I am not eschewing AI tools altogether–I’m just being considerably more careful with them.

As mentioned, I’ve disabled the autocomplete functionality in my code editor. Now, when I need to write or edit a function, I am thinking about what I want it to do and how I want it to do it. Then I’ll potentially use AI tools to flesh out my solution with actual code.
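
If you want to do the same, the exact switch depends on your editor. Cursor’s Tab feature can be turned off from Cursor’s own settings; in VS Code-based editors, the generic inline suggestions can be disabled in settings.json. A sketch of that config (one way to do it, not a universal recipe):

    // settings.json in a VS Code-based editor; Cursor's own Tab feature
    // is toggled separately inside Cursor's settings
    {
      // turn off inline "ghost text" completions
      "editor.inlineSuggest.enabled": false,
      // if you use GitHub Copilot, disable its suggestions for all languages
      "github.copilot.enable": { "*": false }
    }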

The same sort of logic applies to anything outside of coding, too. If I’m going to use AI to help me complete some task, I’m making sure I take a stab at the creative part first (if there is one).

Live Deeply,