A few days ago the Devin demo was debunked as fake. Two reactions come to mind here:
Oh, no, let’s all climb into a bunker and wait things out because who knows exactly how much more of this AI stuff is fake.
Yeah, so what?
I am here to say that there should be a 3rd option.
Basically, yeah, anyone claiming they already have AI-engineer-level capabilities is faking it. We're not there yet. Code is the weakest-performing area for LLMs right now. And I am not even talking about "engineer"-level capabilities - I am talking about code snippets. Code snippets are extremely unreliable from ALL large language models (no exceptions).
Having said that, the trend is clear: 5-10 years from now AI will write code for engineers.
And if 5-10 years from now AI will write code for engineers, then the companies building that future are starting TODAY.
And hence my proposal for a 3rd option:
a) Focus on assessing technical fundamentals: founder-product fit, presence of technical advisors, expert assessments, access to proprietary codebases, etc.
b) Pay attention to the overall vision beyond AI: what does the actual product look like? How is the team thinking about UI/UX, etc.?
Anyone who has done any fine-tuning of LLMs on code knows just how nuanced this problem is. We're still early on this path. But ignoring the trend would be like ignoring Amazon.com and Google in 1998, before the dot-com bust. Are there Pets.coms out there? Yes. Are there real diamonds being born right now? Absolutely!
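For a sense of what "fine-tuning an LLM on code" even involves, here is a minimal sketch of supervised fine-tuning a causal language model on a file of code snippets. The model name, the `code_snippets.jsonl` file, its `content` field, and the hyperparameters are all illustrative assumptions, not a recipe; the nuance lives in everything this sketch skips (data cleaning, deduplication, loss masking, long-context handling, evaluation on real tasks).

```python
# Minimal sketch: supervised fine-tuning of a small causal LM on code snippets.
# Checkpoint, data file, and hyperparameters are hypothetical placeholders.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "gpt2"  # placeholder; any causal LM checkpoint follows the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical dataset: one JSON object per line with a "content" text field.
dataset = load_dataset("json", data_files="code_snippets.jsonl", split="train")

def tokenize(batch):
    # Truncating at 512 tokens already cuts off long functions -- one of many nuances.
    return tokenizer(batch["content"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="code-sft",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # mlm=False gives plain next-token (causal LM) loss over the snippets.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even this toy loop makes the point: getting something to train is easy; getting reliable code generation out the other end is the hard, unsolved part.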
And the comments keep on coming:
https://x.com/OngDevLab/status/1804560586010689836