NOAH
JACOBS
If you are looking for my current venture, you can visit BirdDog's site or ping me directly at noah@getbirddog.ai.
I am a writer & founder, which means I'm also a programmer, salesman, product developer, and whatever else I need to be.
I publish a weekly blog that focuses on learning & growing into the person you want to be. Subscribe below.
More broadly, this site is a record of some of the cool things I've done in the past, like starting and running a hedge fund.
Additionally, the site serves as an archive for my blog posts.
You can even interact with those posts via a knowledge graph, available directly below. The graph helps you visualize the connection between seemingly disparate topics and involves running a language model in your browser.
GRAPH OF MY BLOG POSTS
For a detailed explanation of what's going on, scroll below the graph...
Explanation
One of the reasons I write my blog is to connect seemingly disparate ideas in a fashion that encourages exponential learning in both myself and the reader. I think a natural way to visualize this is with a knowledge graph.
The graph above shows each of my blog posts (lil blue dots) and how they relate to "crafts" (bigger blue dots) and "abstractions" (green dots).
The connections were derived from a mix of exact keyword matches and from asking an LLM whether it thought each post was related to each craft and abstraction. A rough sketch of the keyword pass is below.
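To make the keyword half concrete, here is a minimal sketch of what an exact-match pass could look like. The :body, :name, and :id keys are assumptions for illustration, and the LLM pass is a separate call not shown here.

(require '[clojure.string :as str])

;; For each craft/abstraction whose name appears verbatim in a post,
;; emit an edge from the post to that topic.
(defn keyword-edges [post topics]
  (let [text (str/lower-case (:body post))]
    (for [topic topics
          :when (str/includes? text (str/lower-case (:name topic)))]
      [(:id post) (:id topic)])))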
After clicking on a node, you can see a summary of the node as well as connected nodes that you can traverse to. Additionally, there is a search bar.
I would classify the search functionality as 'experimental'; when you run a search, the following happens (each step is sketched in code after the list):
-> The query is tokenized using a custom WordPiece tokenizer written in Clojure, a natural choice given the language's innately recursive nature and the fact that my site is written in it.
-> The tokens are embedded in your browser using MiniLM via onnxruntime-web. Wild, right?
-> The titles of adjacent nodes are fetched, tokenized, and embedded in your browser.
-> Cosine similarity is run between the query and each of the node embeddings.
-> Given the most similar node, the content of that node and your query are passed into GPT-4o mini to answer your question.
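To give a flavor of the first step, here is a minimal sketch of greedy longest-match WordPiece tokenization in Clojure. It assumes vocab is a set of strings containing word-initial pieces and "##"-prefixed continuation pieces; the real tokenizer also handles normalization and punctuation.

;; Find the longest prefix of `word` that exists in the vocab.
(defn longest-piece [vocab word]
  (loop [end (count word)]
    (when (pos? end)
      (let [piece (subs word 0 end)]
        (if (contains? vocab piece)
          piece
          (recur (dec end)))))))

;; Recursively split a word into pieces; continuation pieces get "##".
(defn wordpiece [vocab word]
  (when (seq word)
    (if-let [piece (longest-piece vocab word)]
      (cons piece
            (when-let [remainder (not-empty (subs word (count piece)))]
              (wordpiece vocab (str "##" remainder))))
      ["[UNK]"])))

;; (wordpiece #{"play" "##ing"} "playing") ;=> ("play" "##ing")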
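The in-browser embedding step looks roughly like the following ClojureScript. The model path, input and output names, and the 384-wide hidden size are assumptions based on typical MiniLM ONNX exports, not my exact code.

(ns graph.embed
  (:require ["onnxruntime-web" :as ort]
            [goog.object :as gobj]))

;; Pack a seq of ints into the int64 tensor of shape [1 n] the model expects.
(defn ->int64-tensor [xs]
  (ort/Tensor. "int64"
               (js/BigInt64Array.from (into-array (map js/BigInt xs)))
               #js [1 (count xs)]))

;; Run the ONNX model, then mean-pool the hidden states into one vector.
(defn embed [session token-ids]
  (let [n     (count token-ids)
        dim   384 ; hidden size of MiniLM-style sentence encoders
        feeds #js {:input_ids      (->int64-tensor token-ids)
                   :attention_mask (->int64-tensor (repeat n 1))
                   :token_type_ids (->int64-tensor (repeat n 0))}]
    (-> (.run session feeds)
        (.then (fn [out]
                 (let [data (.-data (gobj/get out "last_hidden_state"))]
                   (vec (for [d (range dim)]
                          (/ (reduce + (for [t (range n)]
                                         (aget data (+ (* t dim) d))))
                             n)))))))))

;; The session itself is created once, e.g.:
;; (.then (.create ort/InferenceSession "/models/minilm.onnx")
;;        (fn [session] (embed session ids)))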
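The similarity ranking reduces to a few lines; this is the standard formula rather than my exact code, with the node shape (:embedding) assumed for illustration.

(defn dot [a b] (reduce + (map * a b)))

;; cos(a, b) = a.b / (|a| * |b|)
(defn cosine-similarity [a b]
  (/ (dot a b)
     (* (Math/sqrt (dot a a))
        (Math/sqrt (dot b b)))))

;; The most relevant adjacent node is just the arg-max over similarity.
(defn best-node [query-emb nodes]
  (apply max-key #(cosine-similarity query-emb (:embedding %)) nodes))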
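And the last step is a plain chat-completions call. Here is a hedged ClojureScript sketch, with "/api/answer" standing in for whatever server-side proxy keeps the API key out of the browser.

;; Send the best node's content plus the user's query to GPT-4o mini.
;; "/api/answer" is a hypothetical proxy endpoint, not my real one.
(defn answer-question [node query]
  (-> (js/fetch "/api/answer"
                #js {:method  "POST"
                     :headers #js {"Content-Type" "application/json"}
                     :body    (js/JSON.stringify
                               #js {:model    "gpt-4o-mini"
                                    :messages #js [#js {:role    "system"
                                                        :content (str "Answer using this blog post:\n"
                                                                      (:content node))}
                                                   #js {:role    "user"
                                                        :content query}]})})
      (.then #(.json %))))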
If you can't tell, I'm fascinated by the capacity to run a model in your browser. I could pre-embed the titles, but I want to show that it really doesn't take that long in the browser.
As a note, when I was experimenting with the GPT wrapper before I added my blog posts as context, I found that GPT already has some latent knowledge about my blog, which was quite nice to see. That said, this may confound some of the apparent info alpha from selecting context the way I have. I think it's fair, though, to have the same model that made some of the connections evaluate them.
Below is a list of some of the things that made it into the current iteration, followed by a list of dead ends:
Current Iteration