My friends @robwalling and @einarvollset just launched TinySeed, an accelerator for software companies where a successful outcome is a healthy, sustainable business rather than attempting to ride the rocketship trajectory. https://t.co/LSHqZNcVid

I have some thoughts:

As somebody who bootstrapped ~4 companies, I feel like I had to make some clearly suboptimal decisions early in them for lack of what is, in hindsight, not all that much money. But there's a huge gap in the product space for investment options.
It's weird: you can get $25k from Amex trivially, and angels are very willing to write a check for that much, but you have to make representations about your goals/ambitions/market/etc which don't really apply to everyone.
And so you see the traditional angel/VC ecosystem fund companies where honestly the returns are probably not there, and this is knowable pretty early, but the chase of them will wreck what could have been a perfectly happy business.
(To make the math work for traditional VCs, the company has to have at least a market-appropriate shot at $100 million a year. There are a lot more $10 million a year companies than $100 million a year companies. That is *not* a bad terminal outcome for founders/employees.)
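For intuition, here is some illustrative back-of-the-envelope arithmetic behind that claim; every figure below (fund size, target multiple, ownership at exit, revenue multiple) is my own round-number assumption, not TinySeed's or any real fund's actual math:

```python
# Illustrative VC fund math; all numbers are assumptions chosen for round figures.
fund_size = 100_000_000      # a mid-sized venture fund
target_multiple = 3          # LPs roughly expect ~3x back over the fund's life
ownership_at_exit = 0.15     # typical stake after dilution
revenue_multiple = 8         # exit valuation as a multiple of annual revenue

needed_returns = fund_size * target_multiple             # ~$300M back to LPs
# Common heuristic: a breakout winner should be able to "return the fund" by itself.
exit_value_needed = fund_size / ownership_at_exit        # ~$667M exit
annual_revenue_needed = exit_value_needed / revenue_multiple  # ~$83M/year

print(f"Exit needed to return the fund once: ${exit_value_needed:,.0f}")
print(f"Implied annual revenue at that exit: ${annual_revenue_needed:,.0f}")
# At the same multiple, a $10M/year company is an ~$80M exit: a great outcome
# for founders and employees, but it barely dents a ~$300M return target.
```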
I'm glad that there is some experimentation in this space, and know of at least 3 teams in the MicroConf community which are doing takes on it. It's a natural evolution for entrepreneurs after doing the bootstrap-from-nothing thing ~5 times.
You get basically 5~10 shots at building a company in your life, and a lot of my peers are discovering simultaneously "Hmm about half of my shots are spent and I want to be really selective about what I do next but oh goodness still want to be involved in ALL THE SOFTWARE."
And investing has historically been a natural next step for operators in e.g. Silicon Valley; you get to both enjoy vicariously the early days without having to be up at 2 AM anymore, and you get to give back to the community. But bootstrappers have different values/tolerances.
So I see it as a great thing that there is experimentation regarding the reinvestment and mentoring model in the bootstrapping community as well. And I can't think of anybody I'd trust more on this than Rob; he's the real deal.

More from All

@franciscodeasis https://t.co/OuQaBRFPu7
Unfortunately, the statement "This work includes the identification of viral sequences in bat samples, and has resulted in the isolation of three bat SARS-related coronaviruses that are now used as reagents to test therapeutics and vaccines." came BEFORE the chimeric infectious clone grants existed.

https://t.co/DAArwFkz6v is from 2017, Rs4231.
https://t.co/UgXygDjYbW is from 2016, RsSHC014 and RsWIV16.
https://t.co/krO69CsJ94 is from 2013, RsWIV1. Notice that this is before the beginning of the project starting in 2016.

Also remember that they reported only 3 isolates/live viruses. RsSHC014 is a live infectious clone that is just as alive as those other "isolates".

P.D. somehow is able to use funds that he has not yet received, and to send results and sequences from late 2019 back in time to 2015, 2013, and 2016!

https://t.co/4wC7k1Lh54 Ref 3: Why were ALL your pangolin samples PCR negative? To avoid deep sequencing accidentally revealing Paguma larvata and Oryctolagus cuniculus?

How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇(1 / 12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet the rich, natural language annotations accompanying each video are an underused part of these datasets. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building off of prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, make a choice:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)

By trading off *conditioning* and *generation*, we show that we can learn 1) better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.

Why is the ability to shape this balance important? (5/12)
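For intuition about how those two choices can coexist in one objective, here is a minimal, runnable toy sketch; it is not the authors' implementation, and every name in it (TinyVoltronSketch, alpha, the single-linear-layer "encoders") is invented purely for illustration:

```python
# Toy sketch of the conditioning-vs-generation trade-off described above.
# This is NOT the Voltron codebase; all names and shapes are made up.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVoltronSketch(nn.Module):
    def __init__(self, d=128, patch_dim=192, vocab=1000):
        super().__init__()
        self.visual_encoder = nn.Linear(patch_dim, d)   # stand-in for a ViT over masked frame patches
        self.text_embed = nn.Embedding(vocab, d)        # stand-in for a language encoder
        self.pixel_decoder = nn.Linear(d, patch_dim)    # reconstructs patches
        self.lm_head = nn.Linear(d, vocab)              # predicts caption tokens

    def conditioning_loss(self, patches, caption_ids):
        # Choice 1: condition on language, then reconstruct the scene.
        vis = self.visual_encoder(patches)                               # (B, N, d)
        lang = self.text_embed(caption_ids).mean(dim=1, keepdim=True)    # (B, 1, d)
        recon = self.pixel_decoder(vis + lang)                           # crude fusion, illustration only
        return F.mse_loss(recon, patches)

    def generation_loss(self, patches, caption_ids):
        # Choice 2: generate language from the visual representation alone.
        vis = self.visual_encoder(patches).mean(dim=1)                   # pooled visual feature (B, d)
        logits = self.lm_head(vis)                                       # bag-of-words "caption" prediction
        targets = F.one_hot(caption_ids, logits.shape[-1]).float().mean(dim=1)
        return F.binary_cross_entropy_with_logits(logits, targets)

def training_step(model, patches, caption_ids, alpha=0.5):
    # alpha is the balance knob: higher alpha = more language-conditioned
    # reconstruction (low-level detail); lower alpha = more language
    # generation (high-level semantics).
    if random.random() < alpha:
        return model.conditioning_loss(patches, caption_ids)
    return model.generation_loss(patches, caption_ids)

# Example usage with random data:
model = TinyVoltronSketch()
patches = torch.randn(4, 16, 192)            # batch of 4 clips, 16 patches each
captions = torch.randint(0, 1000, (4, 12))   # 12 caption tokens per clip
loss = training_step(model, patches, captions)
```

Here alpha is just a stand-in for whatever mechanism actually balances the two objectives; the point is only that a single shared visual encoder receives gradients from both a language-conditioned reconstruction loss and a language-generation loss.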

You May Also Like

I like this heuristic, and have a few which are similar in intent to it:


Hiring efficiency:

How long does it take, measured from initial expression of interest through signed offer of employment, for a typical candidate cold inbounding to the company?

What is the *theoretical minimum* for *any* candidate?

How long does it take, as a developer newly hired at the company:

* To get a fully credentialed machine issued to you
* To get a fully functional development environment on that machine which could push code to production immediately
* To solo ship one material quantum of work

How long does it take, from first idea floated to "It's on the Internet", to create a piece of marketing collateral?

(For bonus points: break down by ambitiousness / form factor.)

How many people have to say yes to do something which is clearly worth doing, costs $5,000 / $15,000 / $250,000, and has never been done before?