Can someone give the counterargument to my initial cynical read of this? That read being: OpenAI has more money than it can invest productively within its own company and is trying to cast a net to find new product ideas via an incubator?
I can't imagine Softbank or Microsoft is happy about their money being funneled into something like this, and it implies they have run out of ideas internally. But I think I'm probably being too reflexively cynical.
I think that MIT study of 95% of internal AI projects failing has scared off a lot of corporations from risking time on it. I think they also see that they are hitting a limit of profitable intelligence from their services (with the growth in intelligence over the past 6–8 months being more modest and realistic, not unbelievable like in the past few years).
I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains. (Or even early Windows/Microsoft, with its 'developers, developers, developers' target audience.)
I think OpenAI sees that it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network to make sure there is a good, solid ecosystem of AI options that corporations and people can use.
The MIT study found 90% of workers were regularly using LLMs.
The gap was that workers were using their own implementation instead of the company's implementation.
The MIT study as released also does not really provide any support for the 95% failure rate claim. Until we have more details, we really don't know where that number came from:
https://www.linkedin.com/feed/update/urn:li:activity:7365026...
Yea, from what I understand 'chats' and AI coding are areas where they already have market domination / are a leader, and are a good/okay product. It's the other use cases they haven't delivered on, in terms of other companies using them as a platform to deliver AI apps, which I would imagine would have been a huge vertical in their pitches to investors and internal plans.
These third-party apps drive huge token usage with agentic patterns. So losing out on them, and being forced to build more internal products tuned to specific use cases, is not something they want to build out or explore.
AI coding is mid (okay), yes; my main point is that people use it and it's a good line of business for them right now. They expected bigger breakthroughs, like the GPT-2 to 3 to 4 jumps, and that's not happening, so they have to lean on the other aspects of the business more.
The fact that it is mid is why they really need all the other lines of business to work. AKA: selling tokens to AI apps that specialize in other mid products, and limiting the snake-oil AI products that are littering the market and ruining AI's image as the new catch-all solution.
I was a big user of IntelliSense and more heavily, IntelliJ, for most of my career. It truly seemed like magic back then. I recall telling a colleague who preferred Emacs that it felt like having an editor that could read your mind, and would joke that my tab key was getting worn out.
Then I discovered LLMs.
If you think IntelliSense is comparable to what LLMs can do, you really, really need to try giving an AI higher-level problems to solve. Throwaway example I gave in a similar thread a few weeks ago: https://news.ycombinator.com/item?id=44892576
I think a big part of simonw's shtick is trying to get people to give LLMs a proper try, and TBH that's what I end up doing a lot too, including right now! The problem is a "proper try" takes dedicated effort, because it's not obvious where the AI will excel or fail for your specific context, and people legitimately don't have enough time for that.
But once you figure it out, it feels like when you first discovered IntelliSense, except you already know IntelliSense, so it's like... IntelliSense raised to the power of IntelliSense.
The thing is that the languages that need IntelliSense that much are languages that made it too easy to construct complex systems. For Lisp and C, you can get autocompletion for free, and indexing to offer docs previews and signatures can be done quite easily as well. There's also an incentive to keep things short and small.
Then you have Java and C#, where you need a whole IDE if you're writing more than 10 lines, because using anything brings the whole jungle with it.
Hmm, I think all languages, regardless of verbosity, could be better with IntelliSense. I mean, if the IDE can reliably predict what you intend to type based on the context, regardless of the complexity of the application involved, why not have it?
Seems like languages like Java and C# that encourage more complexity just aim to provide richer context to mine. Simple example: given an incomplete line like "TypeA foo = bar.", the IDE can very easily figure out you want "bar.getBlah(baz)", because getBlah has a return type of "TypeA" and "baz" is the only variable available in the scope. But having all that context at that point requires a whole bunch of setup beforehand, like fine-grained types supported by a rich type system, function signatures, and so on, which incentivizes verbosity that usually scales with the complexity of the app.
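To make that concrete, here's a minimal Java sketch of the inference described above (TypeA, Bar, getBlah, and baz are the hypothetical names from the example, not any real API):

    // Hypothetical types, matching the example names above.
    class TypeA {}

    class Bar {
        // Only one member of Bar returns TypeA.
        TypeA getBlah(int baz) { return new TypeA(); }
        String getOther() { return "not assignable to TypeA"; }
    }

    class Demo {
        void complete(Bar bar, int baz) {
            // At the cursor after "TypeA foo = bar.", the IDE can filter
            // Bar's members down to those whose return type matches the
            // declared type (only getBlah), then fill the parameter with
            // the only in-scope value of the right type (baz).
            TypeA foo = bar.getBlah(baz);
        }
    }

All of that filtering is driven purely by declared types and signatures, which is exactly the "richer context to mine" that the verbosity pays for.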
So yes, that's a lot of verbosity, but also a lot of context. To your point, I feel like the philosophy of languages like Java and C# is deliberately based on providing enough context for sophisticated tooling like IntelliSense and IntelliJ.
Unfortunately, the languages came before such sophisticated tooling existed, and when good tools did exist, they were expensive. And even with those tools now being widely and freely available, many people still don't use them. (Plus, in retrospect, the language designs themselves genuinely turned out to be more complex than ideal in some aspects.)
So the current reputation of these languages encouraging undue complexity is probably due to their philosophies being grounded in sound reasoning but based on predictions that didn't quite pan out as expected.
The thing is, we did have nice tooling before those languages came to be. If you look at Smalltalk, it has this type of context in an even more powerful way. You can browse the whole library in a few clicks and view its code. And it has a Playground element where you can try and design stuff. And everything was inspectable.
Same with Lisp. If you take Emacs as an example, you have instant documentation on every function.
Another example is Python, where there's a help system embedded in the language.
Java is basically unwritable without a full indexer and completion. But it has a lot of guardrails and its verbosity discourages deviation.
And today we have Swift and Kotlin, which are barely better. They do a lot of magic behind the scenes to reduce verbosity, but you're still reliant on the indexer, which is now coupled with the compiler for the magic stuff.
Better languages insist on documentation, contextual help, shorter programs, no magic unless created by the programmer, and visibility (inspection with a debugger, and traceability with the system source available, if possible).
I think it's more that OpenAI has the name to throw around and a lot of credibility, but not products that are profitable. They are burning cash and need to show a curve by which they can reach profitability. Getting 15 people with 15 ideas they can throw their weight behind is worth a lot.
Yeah, more or less. Being in the application space as well as the inference space hedges a variety of risks: that inference margins will squeeze, that competition will continue to increase, etc.
Yea, and if you look at all of the job openings they have right now, they are mostly in the "applied AI" space, which is a very different thing from what they have been doing altogether. This is mostly generic enterprise development, which is how they will try to become profitable.
Without putting my weight behind them, here are some counterarguments:
- OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyor belt, but not people who want to be at the centre of a project of their own. This at least puts them in the orbit of OpenAI - some will fly away, some will set up something to be acqui-hired, some will just give up and try to join OpenAI anyway
- the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs
- OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.
Thanks! I think these are strong points, especially about the reaction to DeepSeek. I did have an assumption I didn't put in my original message: that they would probably be making investment offers to founders who walked into this with something like DeepSeek, and that would balloon the costs well beyond office space and engineer time. But even having advance knowledge of a next big idea from this would be worth the cost of entry, yep.
Softbank or Microsoft can’t be happy or sad. CEOs only care about the share price going up while they’re holding the wheel. If Sam wants to start the idea incubator, why would they want to shut it down?
My thinking was that both of these large investors specifically want OpenAI to produce something like AGI or, failing that, something so popular and useful that they make enough money not to care. And they want results this year / early next year. Softbank's latest investment round is partially tied up in OpenAI resolving their non-profit status by the end of this year. Training random founding engineers, with no expectation that they'll even use GPT-5, instead of traditional hiring feels like either a lack of focus or naivete during this critical juncture.
But having said that, I do see the wisdom in the comments that the costs of running a 5-week course/workshop are low, and the value of having a view into what people are making outside of the OpenAI bubble is a decent return all its own.
I don't think it's about money; they don't invest anything. They gather data about "technical talent" working on AI-related ideas. They will connect with 15 of these people to see if they can build it together.
It seems almost like... an internship program for would-be AI founders?
My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.
It's possible that a single senior employee just wanted to do this, it doesn't cost that much, and their manager was like "sure".
I really do want this to be the case.
OpenAI definitely doesn't have more money than it can invest. They burn cash like crazy; that's why they keep raising money every 6 months.
> I can't imagine Softbank or Microsoft is happy about their money being funneled into something like this
Imagining one negative spin doesn't an imagination make. Imagine harder.
> OpenAI has more money than it can invest productively
I don't think there is any money given, except travel costs for the first and last week.
I mean, how much money are they throwing at this? I doubt it approaches anything close to a percent of the cash they have on hand.
Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
It seems that there is a constant motive on this forum to view any decision made by any big AI company with, at best, extreme cynicism and, at worst, virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.
People remember things, and consistently behaving like an asshole gets you treated like an asshole.
OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.
What are the worst things OpenAI has done?
The number one worst thing they've done was when Sam tried to get the US government to regulate AI so only a handful of companies could pursue research. They wanted to protect their moat.
What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.
Thank god they failed. Our legislators had enough of a moment of clarity to take the wait-and-see approach.
It's actually worse than that.
First, when they thought they had a big lead, OpenAI argued for AI regulations (targeting regulatory capture).
Then, when that lead evaporated as Anthropic and others caught up, OpenAI argued against AI regulations (so that they can catch up, and presumably argue for regulations again).
Do you believe AI should not be regulated?
Most regulations that have been suggested would put restrictions mostly on the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google primarily, before smaller upstarts would be affected.
I think you can both think there's a need for some regulation and also want to avoid regulation that effectively locks out competition. When only one company is pushing for regulation, it's a good bet that they see this as a competitive advantage.
Dude, they completely betrayed everything in their "mission". The irony of the name OpenAI for a closed, scammy, for-profit company cannot be lost on you.
They released a near-SOTA open-source model recently.
Their prerogative is to make money via closed-source offerings so they can afford safety work and their open-source offerings. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models giving away everything for free when their competitor is Google, a huge for-profit company.
As per your claim that they are scammy, what about them is scammy?
Their contribution to open source and open research is far behind other organisations like Meta and Mistral, as welcome as their recent model release is. Former safety researchers like Jan Leike commonly cite a lack of organisational focus on safety as a reason for leaving.
Not sure specifically what the commenter is referring to re: scammy, but things like the Scarlett Johansson / Her voice imitation and copyright infringement come to mind for me.
Oh yeah, that reminds me: the company did research on how to train a model that manipulates the metrics, allowing them to tick the open-source box with a seemingly good score while releasing something that serves no real purpose. [1] [2]
GPT-OSS is not a near-state-of-the-art model: it is a model deliberately trained in a way that makes it appear great in evaluations, but it is unusable and far underperforms actual open-source models (like the ones you can run via Ollama). That's scammy.
[1] https://www.lesswrong.com/posts/pLC3bx77AckafHdkq/gpt-oss-is...
[2] https://huggingface.co/openai/gpt-oss-20b/discussions/14
That explains why gpt-oss wasn't working anywhere near as well for me as other similarly sized and smaller models. gemma3 27b and 12b and phi4 (14b?) all significantly outperformed it when transforming unstructured data into structured data.
> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
Isn't that a good thing? The comments here are not sponsored, nor endorsed by YC.
I'd expect to see a balance though, at least on the notion that people would be attracted to posting on a YC forum over other forums due to them supporting or having an interest in YC.
I think the majority of people don't care about YC. It just happens to be the most popular tech forum.
> posting on a YC forum over other forums due to them supporting or having an interest in YC.
I've been posting here for over a decade, and I have absolutely no interest in YC in any way, other than a general strong negative sentiment towards the entire VC industry YC included.
Lots of people come here for the forum, and leave the relationship with YC there.
Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC and it's pretty obvious what he learned from that was to cheat harder, not change his behavior.
Sam Altman wasn't fired from YC.
The story I heard before the later PR spin from Paul Graham (where he tweeted that he never fired him, but asked him to choose between YC and OpenAI) was that he was asked to resign. I don't have an official source; I heard this from multiple YC alumni. I don't know exactly what happened, but based on what I've heard, and actually having interacted with Sam Altman, it seems most likely to me he was asked to resign (which isn't technically being fired) because he does weird stuff. He claimed to be chairman of YC, which wasn't true; he barred other YC partners from running personal funds while he did it himself; and then there are all the further similar behaviors we've seen play out at OpenAI. Maybe you're right, but it seems to me he was "fired" and later there was some PR to smooth it over.
https://archive.is/Vl3VR
https://archive.is/2mzD7
You don't know what exactly happened, but you stated confidently what happened.
That doesn't address the substance of the claim though. What do you know that you aren't telling us about that situation?
You're right about that, and that's why I'm providing additional context.
It's Saturday morning for California, where YC is centered. Everyone here should be out doing anything else (including me). It's not a random sampling of HN commenters, but a certain subset. I think we've just found out which way the subset that comments on Saturday mornings leans.
Well, in a way they are endorsed. They actively censor things they don’t like. Since there’s no moderation log, nobody prevents them from removing things just because they don’t like them.
When dealing with organizations that hold a disproportionate amount of power over your life, it's essential to view them in a somewhat cynical light.
This is true for governments, corporations, unions, and even non-profits. Large organizations, even well-intentioned ones, are "slow AI"[1]. They don't care about you as an individual, and if you don't treat everything they do and say with a healthy amount of skepticism and mistrust, they will trample all over you.
It's not that being openly hostile towards OpenAI on a message board will change their behavior. Only Slow AI can defeat other Slow AI. But it's our collective duty to at least voice our disapproval when a company behaves unethically or builds problematic technology.
I personally enjoy using LLMs. I'm a pretty heavy user of both ChatGPT and Claude, especially for augmenting web search and writing code. But I also believe building these tools was an act of enclosure of the commons at an unprecedented scale, for which LLM vendors must be punished. I believe LLMs are a risk to people who are not properly trained in how to make the best use of them.
It's possible to hold both these ideas in your head at the same time: LLMs are useful, but the organizations building them must be reined in before they cause irreparable damage to society.
[1]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
Why do you assume that a forum run by X needs to or should support X? And why is it unwise - from what metrics do you measure wisdom?
My takeaway is actually the opposite: major props to YC for allowing this free speech unfettered. I can't think of any other organization or country on the planet where such a free setup exists.
Unfettered? Have you ever seen how many posts disappear from being flagged for the most dubious reasons imaginable? Have you been on other sites on the internet? Hell, Reddit is more unfettered and that’s terrible.
I don't want to be glib - but perhaps it is because our "context window lengths" extend back a bit further than yours?
Big tech (not just AI companies) has been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.
Regardless of where you stand on the concept of copyright law, it is an indisputable fact that in order for these companies to get to where they are today, they deliberately HOOVERED up terabytes of copyrighted material without the consent, or even knowledge, of the original authors.
These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.
True, though it seems most people on HN think AGI is impossible and thus would consider OpenAI's quest a lost cause.
I don't think one can validly draw any such conclusion.
When you call yourself "Open"AI and then turn around and backstab the entire open community, it's pretty hard to recover from that.
They undermined their not-for-profit mission by changing their governance structure. This changed their very DNA.
They released a near-SOTA open source model not too long ago.
They didn't release the source to it.
Open weights != open source.
Because of the repeated rugpulling?
> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
Because our views are our own and not reflective of the feelings of the company that hosts the forum?
I would call it skepticism, not cynicism. And there is a long list of reasons that big tech and big AI companies are met with skepticism when they trot out nice sounding ideas that require everyone to just trust in their sincerity despite prior evidence.
I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?
Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.
I'd suggest some self-reflection: ask yourself why you need to carry water for them.
I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.
I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.
Perhaps the people you see as cynical have more research and/or experience behind their views on OpenAI than you. Many of us have been more naive in the past, including specifically towards Altman, Microsoft, and OpenAI.
Microsoft, especially, has a long history of malfeasance.
People here are directly in the line of fire for their jobs. It's not surprising.
True, but there are many reasons besides. Meta and Anthropic attract less criticism for a reason.
This. I've been on HN for a while. I am barely hanging on to this community. It is near-constant negativity and the questioning of every potential motive.
Skepticism is healthy. Cynicism is exhausting.
Thank you for posting this.
In the current echo chamber and unprecedented hype, I'll take cynicism over hollow positivity and sycophancy.
> pre-idea individuals
First time I am hearing this term. It is a euphemism, like pre-owned cars (instead of used cars).
What does this mean? People who do not yet have any idea? Weird.
YC tried this at some point: just hire white guys who seem "bright" on paper but can't come up with any idea whatsoever, and see where it goes.
Spoiler: it didn't go anywhere. The story on HN is still here:
https://news.ycombinator.com/item?id=3700712
but the link is 404:
https://www.ycombinator.com/noidea.html
Sadly, yes, a lot of people want to be entrepreneurs for prestige/wealth. In their imagination they skip ahead to a fantastical ending: being rich and respected.
I find this disturbing. How can someone be useful to others without an idea of what that even means? How can one provide a novel offering without even caring about it? It's an expression of missing craft and bad taste. These aspirations are reactive, not generated by something beautiful (like kindness, or optimism).
Fortunately it is not hopeless; aspiring entrepreneurs can find deeper motivation if they look for it.
(I like to give the following advice: it is easier to first be useful to others and become rich than it is to be rich and then become useful to others. This almost certainly requires sufficient empathy and care to have a hypothesis and be "post-idea".)
Hey, "missing craft and bad taste?" Perhaps this hiring technique actually makes sense for OpenAI.
From my firsthand observations of the startup world, there are already plenty of pre-idea rich guys holding expensive "conferences" where they talk about nothing and feel very good about themselves because of it. That OpenAI feels the need to write a blog post about their shiny new cohort of useless trust-fund boys is peculiar, but plenty of companies do this sort of thing.
Entrepreneurship is the act of creation, a noble activity.
The irony of the term entrepreneur is that anyone who calls themselves an entrepreneur isn't, and the ones that are, don't.
More an act of risk-taking.
Drop the "Ideas." Just "Guy." It's cleaner.
You can buy ideas the same way you can buy expensive cars and bags. Centuries ago, some rich Europeans used to do that. Then we discovered 'merit'.
I cannot imagine not having far more ideas than I could possibly ever do. Today I was describing one to my partner and she told me the only reason I shouldn't do it is that I have too many other things to do.
The thing that makes me continually have ideas is the same thing that makes me not want to dedicate my life to implementing just one of them. It would be like picking a favourite child if I were producing offspring like a queen bee.
I think there is value in the effort to develop something, and frequently implementing something well is worth as much as, and sometimes much more than, a simple proof of concept. Someone has to build the things; it should be the people who are good at that and feel rewarded more by a job done well than by a job done differently.
I do think there isn't enough perspective on the lives that other people lead, which can cause odd side effects. Some people keep their ideas secret, or overvalue an idea because it was the one they had. This is a perspective I find hard to relate to. Most of the creative people I know are much happier when someone knows about their creations. Ideas are like grains of sand, each with its own details, and each can be evaluated in many different ways. A lot of intellectual property feels like watching a man jealously protect his grain of sand while standing on a beach.
I believe that is why the intent of things like copyright is not to protect ideas themselves. You cannot copyright an idea, and as an ideas person (a rather horrid term) that feels appropriate. The thing you have built around the idea is the valuable thing you have contributed to the world. I think that is why items that are copyrightable are referred to as works. The value you bring comes from the work you did, not the idea you had; ideas just come to you (often at inconvenient times).
Mass media causes a bit of an aberration because of this. What makes someone wealthy from a popular work is not proportional to the work done to produce it, or even to the quality of the work. Works that can be easily reproduced and distributed receive a reward disproportionate to their quality: a median-quality work in many fields can receive next to no reward, while the most popular works receive a massive one. The mechanism that allows control of supply to reward work ends up shaping a supply-demand curve that gives massive rewards to a very few and very little to the majority. There is still an element of merit to the successes; the popular things are popular for a reason, and some of those things really are the best. The question is: would they still have been the best if everyone who worked to create things were rewarded more linearly with quality? Would that support enough development of ability and opportunity that the pool from which the best are selected becomes much larger?
[this might have gone off topic, but obviously my brain has things that have to come out]
> Thank you for your application. We will contact a select group of applicants in the coming weeks. If you are not contacted, we’d love to have you apply for the next cohort.
They can't even be bothered to ask ChatGPT to send a "no" email. Incredible.
Yeah, my thoughts were along the same lines. Seems like they want to be another Y Combinator, but more focused on AI. (Although TBF, I guess AI would also get the most traction at Y Combinator these days, given the hype wave.)
The really odd thing was when he got fired for like 3 days in 2023 because he refused to let Y Combinator have preferential representation of its startups in OpenAI models.
We don't invest in ideas, we invest in founders. That's why OpenAI partnered with Y Combinator to bring you investments at the pre-founder stage.
We'll invest in your baby even before it's born! Simply accept our $10,000 now, and we'll own 30% of what your child makes in its lifetime. The womb is a hostile environment where the fetus needs to fight for survival, and a baby that actually manages to be born has the kind of can-do attitude and fierce determination and grit we're looking for in a founder.
Feels like the next logical move to me: they need to build and grow the demand for their product and API.
What better than companies whose central purpose is putting their API to use creatively? Rather than just waiting and hoping every F500 can implement AI improvements that aren't cut during budget crunches.
...no one thinks it's weird for the supposedly most transformational digital technology ever invented to need manufactured demand?? None of us think it's strange that a startup currently vying for a half a trillion dollar valuation is looking to "pre-idea founders" to help them find PMF??
Would this have been viewed with skepticism if any other startup from like 5+ years ago selling an API did this? If so, then how is it not even worse when a startup that is supposed to be providing access to what is pushed as a technical marvel of a panacea or something does it?
Sometimes I feel like I'm taking crazy pills...
I literally help companies implement AI systems. So I'm not denying there being any value...just...I don't understand how we can say with a straight face that they need to "build and grow demand for their product and API" while the same company was just reported on inking a $300B deal with Oracle for infra...like come on...the demand isn't there yet?!
There's a difference between having product ideas rooted in compelling hypotheses on the one hand, and random ideas you throw against a wall to see what sticks on the other.
I suspect, but could be wrong, that in OpenAI’s case it is because they believed they will reach AGI imminently and then “all problems are solved”, in other words the ultimate product. However, since that isn’t going to happen, they now have to think of more concrete products that are hard to copy and that people are willing to pay for.
If you are pre-idea today, does OpenAI believe your startup will still be relevant in the face of the AGI progress they forecast to make in the time it takes you to ship?
I ask questions like that in my head all the time. My metric: once their AI is smart enough to make their own website not throw up an error half the time, I'll have to more deeply consider any AGI claims.
In 10 years, people will apply for jobs for their children before conception, and wisely not have kids if they can’t line one up (at least as a backup.)
To me, it sounded like, "let's find all the idea guys who can't afford a tech founder. Then we'll see which ones have the best ideas, and move forward with those. As a bonus, we'll know exactly where we'd be able to acquihire a product manager for it!"
I'm highly capable of building some great things, but at my day job I'm filled to the brim with things to do and a never-ending list of tasks in front of me.
I've built cool stuff before, and if given a little push and some support could probably come up with something useful - and I can implement much of it myself.
Put me in the room with cool people, throw out some conversation starters, shake it up and I'll come up with something.
This smacks so much of a Silicon Valley episode. "Pre-idea individuals"… Sounds like they want people with no opinions. Next we'll be saying things like "no-thought personas".
The country selection menu seems to include countries from around the world. It sounds like only the first and last weeks are actually on-site; the rest is async/remote.
Looks like they want to build up and support middlemen to do the apps more than doing them themselves, taking more of a platform or operating-system position. Which makes sense, with giant corporations reporting 95% of AI projects failing, while the core success cases are specialist companies tuning the platform to a specific problem. Then there are a ton of snake-oil AI apps that are over-promising and under-delivering, hurting the image of AI's usefulness.
This is probably purely a pivot in market strategy toward profitability, to increase token usage and increase consumer/public trust, more than farming ideas for internal projects.
It's clearly a talent grab. Where talent = creativity.
Most will submit the application with dime-a-dozen ideas. (Or, at internet scale, a dime a few hundred thousand, I guess?) No need to even consider those guys.
But it will be a pyramid. There will likely be 20-30 submissions that are at once truly novel and "why didn't I think of that!"-type ideas.
Finally, a handful of the submissions will be groundbreaking.
Et voilà. Right there you've identified the guys and gals thinking outside the LLM box about LLMs. Or even AI in general.
There are serious problems if they are lacking ideas while employing some of the supposedly best talent in the industry. Once your idea is out of the bag there is no way for you to control what happens with it.
The internet and many adjacent technologies were all created and iterated on inside the DoD and other wings of government research.
The world really benefits from well funded institutions doing research and development. Medicine has also largely advanced due in part to this.
What’s lost is the recapture. I don’t think governments are typically the best candidate to bring a new technology to marketable applications, but I do think they should be able to force terms of licensure and royalties. Keeping both those costs predictable and flat across industry would drive even more innovation to market.
What happens instead is private entities take public research and capture it almost entirely in as few hands as possible.
In short, the loss of civic pride and shared responsibility to society has created the nickel-and-dime-you-to-death capitalism we are seeing on the rise today: externalize every cost possible and capture as much profit as possible, with no thought to second-order effects, or to how the very system being dodged gave people the ability to so grossly take advantage of it in the first place.
> The internet and many adjacent technologies were all created and iterated on inside the DoD and other wings of government research.
^ This is the secret sauce. For decades the arrangement was exactly that: defense projects would create new technologies, then once those were finished, they were handed to private industry to figure out how to make a $20,000 MIL-spec LCD screen cheap enough and in vast enough quantities that you can buy 3 of them for less than $1,000 while the manufacturer, distributor, and retailer make a solid profit each. That's not an easy thing to do and it's what corporations have historically been good at. And it makes things better for the defense industry too, because they can then apply those lessons to their own hardware where appropriate. Win/win.
But we don't fund research anymore, or at least not that sort. Or perhaps there's just not much else to find. I think it's a bit of both. But in any case, nothing new is getting made, which is why technology feels so dull right now. The most innovative products right now are just thinner, dumber, lighter versions of things we already have, and that's not nothing, but it isn't very interesting either.
Labor, FOSS... can you not imagine anything besides wealthy people creating artificial scarcity to force others to work for them?
Edit: if you don't think this is true, look at the history of truly any country and see what happens when subsistence farmers and indigenous communities refuse to work for capitalists
"Labor, FOSS" - can you be more specific? All FOSS projects operate within capitalism. Do you think Linux would be as successful as it is without the UNIX roots created by Bell Labs, a capitalism darling, or without substantial contributions from companies like Intel?
Think of all the people who solved problems before/outside of typical capitalism. I guess more of those people wouldn't hurt to have right now to counter-balance the shift to hyper-capitalism that is ongoing.
Can someone give the counter argument to my initial cynical read of this? That read being: OpenAI has more money than it can invest productively within it's own company and is trying to cast a net to find new product ideas via an incubator? I can't imagine Softbank or Microsoft is happy about their money being funneled into something like this and it implies they have run out of ideas internally. But I think I'm probably being too reflexively cynical
I think that MIT study of 95% of internal AI projects failing has scared off a lot of corporations from risking time in it. I think they also see they are hitting a limit of profitable intelligence from their services. (with the growth in inelegance the past 6–8 months being more realistic, not the unbelievable like in the past few years)
I think everyone is starting to see this as a middle man problem to solve, look at ERP systems for instance when they popped up it had some growing pains as an industry. (or even early windows/microsoft 'developers, developers, developers' target audience)
I OpenAI see it will take a lot of third party devs to take what OpenAI has and run with it. So they want to build a good developer and start up network to make sure that there are a good, solid ecosystem of options corporations and people can use AI wise.
The MIT study found 90% of workers were regularly using LLMs.
The gap was that workers were using their own implementation instead of the company's implementation.
The MIT study as released also does not really provide any support for the 95% failure rate claim. Until we have more details, we really don't know where that number came from:
https://www.linkedin.com/feed/update/urn:li:activity:7365026...
Yea from what I understand 'Chats' and AI coding are something they already have market domination/are a leader on and are a good/okay product. It's the other use cases they haven't delievered on in terms of other companies using them as a platform to deliver AI apps, which I would imagine would have been a huge vertical in their pitches to investors and internal plans.
These third-party apps get huge token usage with agenentic patterns. So losing out on them and being forced to make more internal products to tune to specific use cases is not something they want to biuld out or explore
[flagged]
AI coding is mid(okay) yes, my main point is people use it and it's a good line of business right now for them. They expected bigger break throughs like gpt-2 to 3 to 4, and that's not happening so they have to lean on the other aspects of the business more.
The fact it is mid is why they are really needing all the other lines of business to work. AKA selling tokens to AI apps the specialize in other mid products, and limit the snakeoil AI products that are littering the market ruining AI's image of being the new catch all solution.
I was a big user of IntelliSense and more heavily, IntelliJ, for most of my career. It truly seemed like magic back then. I recall telling a colleague who preferred Emacs that it felt like having an editor that could read your mind, and would joke that my tab key was getting worn out.
Then I discovered LLMs.
If you think IntelliSense is comparable to what LLMs can do, you really, really need to try giving an AI higher-level problems to solve. Throwaway example I gave in a similar thread a few weeks ago: https://news.ycombinator.com/item?id=44892576
I think a big part of simonw's shtick is trying to get people to give LLMs a proper try, and TBH that's what I end up doing a lot too, including right now! The problem is a "proper try" takes dedicated effort, because it's not obvious where the AI will excel or fail for your specific context, and people legitimately don't have enough time for that.
But once you figure it out, it feels like when you first discovered IntelliSense, except you already know IntelliSense, so it's like... IntelliSense raised to the power of IntelliSense.
The things is that languages that need intellisense that much are language that made it too easy to construct complex systems. For lisp and C, you can get autocompletion for free, and indexing to offer docs preview and signature can be done quite easily as well. There's also an incentive to keep things short and small.
Then you have Java and C# where you need a whole IDE if you're writing more than 10 lines. Because using anything brings the whole jungle with it.
Hmm, I think all languages, regardless of verbosity, could be better with IntelliSense. I mean, if the IDE can reliably predict what you intend to type based on the context, regardless of the complexity of the application involved, why not have it?
Seems like languages like Java and C# that encourage more complexity just aim to provide richer context to mine. Simple example, given an incomplete line like "TypeA foo = bar.", the IDE can very easily figure out you want "bar.getBlah(baz)" because getBlah has a return type of "TypeA" and "baz" is the only variable available in the scope. But to have all that context at that point requires a whole bunch of setup beforehand, like a fine-grained types supported by a rich type system and function signatures and so on, which incentivizes verbosity that usually scales with the complexity of the app.
So yes, that's a lot of verbosity, but also a lot of context. To your point, I feel like the philosophy of languages like Java and C# is deliberately based on providing enough context for sophisticated tooling like IntelliSense and IntelliJ.
Unfortunately, the languages came before such sophisticated tooling existed, and when good tools did exist they were expensive, and even with those tools now being widely and freely availble, many people still don't use them. (Plus, in retrospect, the language designs themselves genuinely turned out to be more complex than ideal in some aspects.)
So the current reputation of these languages encouraging undue complexity is probably due to their philosophies being grounded in sound reasoning but based on predictions that didn't quite pan out as expected.
The thing is we did have nice tooling before those languages came to be. If you look at Smalltalk, it has this type of context in an even more powerful way. You can browse the whole library in a few click and view its code. And it has a Playground element where you can try and design stuff. And everything was inspectable.
Same with Lisp. If you take emacs has an example, you have instant documentation on every functions. Another example can be python where there’s an help system embedded into the language.
Java is basically unwritable without a full indexer and completion. But it has a lot of guardrails and its verbosity discourages deviation.
And today we have Swift and kotlin which is barely better. They do a lot of magic behind the scene to reduce verbosity, but you’re still reliant on the indexer which is now coupled with a compiler for the magic stuff.
Better languages insists on documentation, contextual help, shorter programs, no magic unless created by the programmer, and visibility (inspection with a debugger and traceability with the system source available, if possible).
I think it’s more like Open AI has the name to throw around and a lot of credibility but not products that are profitable. They are burning cash and need to show a curve that they can reach profitability. Getting 15 people with 15 ideas they can throw their weight behind is worth a lot
Yeah, more or less. Being in the application space as well as the inference space hedges a variety of risks, that inference margins will squeeze, that competition will continue to increase, etc etc.
Yea and if you look at all of the job openings they have right now, they are mostly in the “applied AI” space which is a very different thing from what they have been doing altogether. This is mostly generic enterprise development which is how they will try to become profitable
Without putting my weight behind them, here's some counterarguments:
- OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyer belt, but not people who want to be a centre of a project of their own. This at least puts them in the orbit of OpenAI - some will fly away, some will set up something to be aquihired, some will just give up and try to join OpenAI anyway
- the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs
- OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.
Thanks! I think these are strong points, especially about the reaction to deepseek. I did have an assumption I didn't put in my original message, that they would probably be making investment offers to founders who walked into this with something like deepseek and that would balloon the costs well beyond office space and engineer time. But even having advanced knowledge of a next big idea from this would be worth the cost of entry yep.
Softbank or Microsoft can’t be happy or sad. CEOs only care about the share price going up while they’re holding the wheel. If Sam wants to start the idea incubator, why would they want to shut it down?
My thinking was that both of these large investors specifically want openAI to produce something like agi or failing that, something so popular and useful they make enough money not to care. And they want results this year/early next year. Softbank's latest investment round is partially tied up in openAI resolving their non-profit status by the end of this year. Training random founding engineers with no expectations of even using GPT-5 instead of traditional hiring feels either like a lack of focus or niave during this critical juncture.
But having said that, I do see the wisdom in the comments that the costs in running a 5 week course/workshop are low and the value in having a view into what people are making outside of the openAI bubble is a decent return all its own.
I don't think it's about money, they don't invest anything. They gather data about "technical talent" working on AI related ideas. They will connect with 15 of these people to see if they can build it together.
It seems almost like... an internship program for would-be AI founders?
My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.
It's possible that a single senior employee just wanted to do this and it doesn't cost that much and their manager was like "sure"
I really do want this to be the case
OpenAI definitely doesn’t have more money than it can invest. They burn cash like crazy that’s why they keep raising money every 6 months.
> I can't imagine Softbank or Microsoft is happy about their money being funneled into something like this
Imagining one negative spin doesn’t an imagination make. Imagine harder.
> OpenAI has more money than it can invest productively
I don't think there is any money given, except travel costs for first and last week.
I mean, how much money are they throwing at this? I doubt it approaches anything close to a percent of the cash they have on hand.
Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
It seems that there is a constant motive to view any decision made by any big AI company on this forum at best with extreme cynicism and at worse virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain at the moment.
People remember things and consistently behaving like an asshole gets you treated like an asshole.
OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.
What are the worst things OpenAI has done
The number one worst thing they've done was when Sam tried to get the US government to regulate AI so only a handful of companies could pursue research. They wanted to protect their moat.
What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.
Thank god they failed. Our legislators had enough of a moment of clarity to take the wait and see approach.
It's actually worse than that.
First, when they thought they had a big lead, OpenAI argued for AI regulations (targeting regulatory capture).
Then, when lead evaporated by Anthropic and others, OpenAI argued against AI regulations (so that they can catch up, and presumably argue for regulations again).
Do you believe AI should not be regulated?
Most regulations that have been suggested would but restrictions mostly the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google primarily before smaller upstarts would be affected.
I think you can both think there's a need for some regulation and also want to avoid regulation that effectively locks out competition. When only one company is pushing for regulation, it's a good bet that they see this as a competitive advantage.
Dude, they completely betrayed everything in their "mission". The irony in the name OpenAI for a closed, scammy, for profit company can not be lost on you.
They released a near-SOTA open-source model recently.
Their prerogative is to make money via closed-source offerings so they can afford safety work and their open-source offerings. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models giving away everything for free when their competitor is Google, a huge for-profit company.
As per your claim that they are scammy, what about them is scammy?
Their contribution to opensouurce and open research is far behind other organisations like Meta and Mistral, as welcome as their recent model release is. Former security researchers like Jan Leike commonly cite a lack of organisational focus on security as a reason for leaving.
Not sure specifically what the commenter is referring to re: scammy, but things like the Scarlett Johansson / Her voice imitation and copyright infringement come to mind for me.
Oh yeah, that reminds me. the company did research on how to train a model that manipulates the metrics, allowing them to tick the open source box with a seemingly good score, while releasing something that serves no real purpose. [1] [2]
GPT-OSS is not a near-state-of-the-art model: it is a model deliberately trained in a way that it appears great in the evaluations, but is unusable and far underperforms actual open source models like Ollama. That's scammy.
[1] https://www.lesswrong.com/posts/pLC3bx77AckafHdkq/gpt-oss-is...
[2] https://huggingface.co/openai/gpt-oss-20b/discussions/14
That explains why gpt-oss wasn't working anywhere near as well for me as other similarly and smaller sized models. gemma3 27b, 12b, and phi4 (14b?) all significantly outperformed it when transforming unstructured data to structured data.
> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
Isnt that a good thing? The comments here are not sponsored, nor endorsed by YC.
I'd expect to see a balance though, at least on the notion that people would be attracted to posting on a YC forum over other forums due to them supporting or having an interest in YC.
I think the majority of people don't care about YC. It just happens to be the most popular tech forum.
> posting on a YC forum over other forums due to them supporting or having an interest in YC.
I've been posting here for over a decade, and I have absolutely no interest in YC in any way, other than a general strong negative sentiment towards the entire VC industry YC included.
Lots of people come here for the forum, and leave the relationship with YC there.
Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC and it's pretty obvious what he learned from that was to cheat harder, not change his behavior.
Sam Altman wasn't fired from YC.
The story I heard before the PR spin that came from Paul Graham later (where he tweeted that he never fired him and asked him to choose between YC and OpenAI) was that he was asked to resign. I don't have an official source, I heard this from multiple YC alumni. I don't know exactly what happened but based on what I've heard and actually having interacted with Sam Altman, it seems most likely to me he was asked to resign (which isn't technically being fired) because he does weird stuff. He claimed to be a chairman of YC which wasn't true, he barred other YC partners from running personal funds while he did it himself, and then all the further similar behaviors we've seen play out at OpenAI. Maybe you're right, but it seems to me he was "fired" and later there was some PR to smooth it over.
https://archive.is/Vl3VR
https://archive.is/2mzD7
You don't know what exactly happened, but you stated confidently what happened.
That doesn't address the substance of the claim though. What do you know that you aren't telling us about that situation?
You're right about that and that's why I'm providing additional context.
It's Saturday morning for California, where YC is centered. Everyone here should be out doing anything else (including me). It's not a random sampling of HN commenters, but a certain subset. I think we've just found out which way the subset that comments on Saturday mornings leans.
Well, in a way they are endorsed. They actively censor things they don’t like. Since there’s no moderation log, nobody prevents them from removing things just because they don’t like them.
When dealing with organizations that hold a disproportionate amount of power over your life, it's essential to view them in a somewhat cynical light.
This is true for governments, corporations, unions, and even non-profits. Large organizations, even well-intentioned ones, are "slow AI"[1]. They don't care about you as an individual, and if you don't treat everything they do and say with a healthy amount of skepticism and mistrust, they will trample all over you.
It's not that being openly hostile towards OpenAI on a message board will change their behavior. Only Slow AI can defeat other Slow AI. But it's our collective duty to at least voice our disapproval when a company behaves unethically or builds problematic technology.
I personally enjoy using LLMs. I'm a pretty heavy user of both ChatGPT and Claude, especially for augmenting web search and writing code. But I also believe building these tools was an act of enclosure of the commons at an unprecedented scale, for which LLM vendors must be punished. I believe LLMs are a risk to people who are not properly trained in how to make the best use of them.
It's possible to hold both these ideas in your head at the same time: LLMs are useful, but the organizations building them must be reined in before they cause irreparable damage to society.
[1]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
Why do you assume that a forum run by X needs to or should support X? And why is it unwise - from what metrics do you measure wisdom?
My takeaway is actually the opposite, major props to YC for allowing this free speech unfettered - I cant think of any other organization or country on the planet where such a free setup exists
Unfettered? Have you ever seen how many posts disappear from being flagged for the most dubious reasons imaginable? Have you been on other sites on the internet? Hell, Reddit is more unfettered and that’s terrible.
I don't want to be glib - but perhaps it is because our "context window lengths" extend back a bit further than yours?
Big tech (not just AI companies) have been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.
Regardless of where you stand on the concept of copyright law, it is an indisputable fact that in order for these companies to get to where they are today - they deliberately HOOVERED up terabytes of copyrighted materials without the consent or even knowledge of the original authors.
These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.
True, though it seems most people on HN think AGI is impossible thus would consider OpenAI's quest a lost cause.
I don’t think one can validly draw any such conclusion.
When you call yourself "Open"AI and then turn around and backstab the entire open community, its pretty hard to recover from that.
They undermined their not-for-profit mission by changing their governance structure. This changed their very DNA.
They released a near-SOTA open source model not too long ago
they didn't release the source to it.
open weights != open source
because of the repeated rugpulling?
> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
Because our views are our own and not reflective of the feelings of the company that hosts the forum?
I would call it skepticism, not cynicism. And there is a long list of reasons that big tech and big AI companies are met with skepticism when they trot out nice sounding ideas that require everyone to just trust in their sincerity despite prior evidence.
I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?
Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.
I’d do some self reflection and ask yourself why you need to carry water for them.
I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.
I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.
Perhaps the people you see as cynical have more research and/or experience behind their views on OpenAI than you. Many of us have been more naive in the past, including specifically towards Altman, Microsoft, and OpenAI.
Microsoft, especially, has a long history of malfeasance.
People here are directly in the line of fire for their jobs. It’s not surprising.
True, but there are many reasons besides. Meta and Anthropic attract less criticism for a reason.
This. I’ve been on HN for a while. I am barely hanging on to this community. It is near constant negativity and the questioning of every potential motive.
Skepticism is healthy. Cynicism is exhausting.
Thank you for posting this.
In the current echo chamber and unprecedented hype, I'll take cynicism over hollow positivity and sycophancy
> pre-idea individuals
First time I am hearing this term. It is a euphemism like pre-owned cars (instead of used cars).
What does this mean? People who do not yet have any idea? Weird.
YC tried this at some point. Just hire white guys that seem "bright" on paper but can't come up with any idea whatsoever, see where it goes.
Spoiler: it didn't go anywhere. The story on HN is still here:
https://news.ycombinator.com/item?id=3700712
but the link is 404
https://www.ycombinator.com/noidea.html
Sadly, yes, a lot of people want to be entrepreneurs for prestige/wealth. In their imagination they skip ahead to a fantastical ending: being rich and respected.
I find this disturbing. How can someone be useful to others without an idea of what that even means? How can one provide a novel offering without even caring about it? It's an expression of missing craft and bad taste. These aspirations are reactive, not generated by something beautiful (like kindness, or optimism).
Fortunately it is not hopeless; aspiring entrepreneurs can find deeper motivation if they look for it.
(I like to give the following advice: it is easier to first be useful to others and become rich than it is to be rich and then become useful to others. This almost certainly requires sufficient empathy and care to have a hypothesis and be "post-idea".)
Hey, "missing craft and bad taste?" Perhaps this hiring technique actually makes sense for OpenAI.
From my firsthand observations of the startup world, there are already plenty of pre-idea rich guys having expensive "conferences" where they talk about nothing and feel very good about themselves because of it. That OpenAI feels the need to write a blog about their shiny new cohort of useless trust fund boys is peculiar, but plenty of companies do this sort of thing.
Entrepreneurship is the act of creation, a noble activity.
The irony of the term entrepreneur is anyone who calls themselves an entrepreneur isn't, and the ones that are, don't.
More an act of risk-taking, really.
Drop the "Ideas." Just "Guy." It's cleaner.
You can buy ideas the same way you can buy expensive cars and bags. Centuries ago, some rich Europeans used to do exactly that. Then we discovered 'merit'.
I cannot imagine not having far more ideas than I could possibly ever do. Today I was describing one to my partner and she told me the only reason I shouldn't do it is that I have too many other things to do.
The thing that makes me continually have ideas is the same thing that makes me not want to dedicate my life to implementing just one of them. It would be like picking a favourite child if I were producing offspring like a queen bee.
I think there is value in the effort to develop something, and implementing something well is frequently worth as much as, and sometimes much more than, a simple proof of concept. Someone has to build the things; it should be the people who are good at that, and who feel rewarded more by a job done well than by a job done differently.
I do think a lack of perspective on the lives other people lead can cause odd side effects. Some people keep their ideas secret, or overvalue an idea simply because it was the one they had. That is a perspective I find hard to relate to. Most of the creative people I know are much happier when someone knows about their creations. Ideas are like grains of sand: each has its own details and can be evaluated in many different ways. A lot of intellectual property feels like watching a man jealously guard his grain of sand while standing on a beach.
I believe that is why things like copyright are intended not to protect ideas themselves. You cannot copyright an idea, and as an ideas person (a rather horrid term) that feels appropriate. The thing you have built around the idea is the valuable thing you have contributed to the world. I think that is why copyrightable items are referred to as works. The value you bring comes from the work you did, not the idea you had; ideas just come to you (often at inconvenient times).
Mass media causes a bit of an aberration because of this. What makes someone wealthy from a popular work is not proportional to the work done to produce it, or even to its quality. Works that can be easily reproduced and distributed receive rewards disproportionate to their quality: a median-quality work in many fields can receive next to no reward, while the most popular works receive a massive one. The mechanism that lets control of supply reward work ends up shaping a supply-demand curve that gives enormous rewards to a very few and very little to the majority. There is still an element of merit to the successes; the popular things are popular for a reason, and some of them really are the best. The question is whether they would still have been the best if everyone who worked to create things were rewarded more linearly with quality: would that support enough development of ability and opportunity that the pool from which the best are selected becomes much larger?
[this might have gone off topic, but obviously my brain has things that have to come out]
Tried a mock application. Got this at the end:
> Thank you for your application. We will contact a select group of applicants in the coming weeks. If you are not contacted, we’d love to have you apply for the next cohort.
They can't even be bothered to ask ChatGPT to send a "no" email. Incredible.
Sam clearly misses Y Combinator.
Yeah, my thoughts were along the same lines. Seems like they want to be another Y Combinator, but more focused on AI. (Although TBF, I guess AI would also get the most traction at Y Combinator these days, given the hype wave.)
Did we ever find out why it is he doesn’t work there anymore?
Was forced to choose between OpenAI and YC by Paul Graham and Jessica. Sama chose OpenAI.
https://x.com/paulg/status/1796107666265108940
The really odd thing was when he got fired for like 3 days in 2023 because he refused to let Y Combinator have preferential representation of its startups in OpenAI models.
Clearly, dealing with OpenAI doesn't leave any room for fun stuff like YC. Just a hunch.
Indeed.
Exactly what I read between the lines on this.
I randomly saw him being announced as a big deal here on this forum years ago, and I remember thinking: what has this guy done to deserve this?
OpenAI appears to lack clear product vision.
This feels like a program to see what sticks.
"Pre-idea stage" support is wild to me
We don't invest in ideas, we invest in founders. That's why OpenAI partnered with Y Combinator to bring you investments at the pre-founder stage.
We'll invest in your baby even before it's born! Simply accept our $10,000 now, and we'll own 30% of what your child makes in its lifetime. The womb is a hostile environment where the fetus needs to fight for survival, and a baby that actually manages to be born has the kind of can-do attitude and fierce determination and grit we're looking for in a founder.
Can I bet on which sperm will reach the egg?
err, "invest".
Feels like the next logical move to me: they need to build and grow the demand for their product and API.
What better than companies whose central purpose is putting their API to use creatively? Rather than just waiting and hoping every F500 can implement AI improvements that aren't cut during budget crunches.
...no one thinks it's weird for the supposedly most transformational digital technology ever invented to need manufactured demand?? None of us think it's strange that a startup currently vying for a half-trillion-dollar valuation is looking to "pre-idea founders" to help it find PMF??
Would this have been viewed with skepticism if any other startup selling an API had done it 5+ years ago? If so, then how is it not even worse coming from a startup that is supposed to be providing access to what's pushed as a technical marvel, a panacea, or something like it?
Sometimes I feel like I'm taking crazy pills...
I literally help companies implement AI systems, so I'm not denying there's value... just... I don't understand how we can say with a straight face that they need to "build and grow demand for their product and API" when the same company was just reported to have inked a $300B deal with Oracle for infra... like, come on... the demand isn't there yet?!
> This feels like a program to see what sticks.
Isn't that how we got (and eventually lost) most Google products?
There’s a difference between product ideas rooted in compelling hypotheses on the one hand, and random ideas you throw against a wall to see what sticks on the other.
I suspect, but could be wrong, that in OpenAI’s case it is because they believed they would reach AGI imminently, and then “all problems are solved”: in other words, the ultimate product. However, since that isn’t going to happen, they now have to think of more concrete products that are hard to copy and that people are willing to pay for.
Did anyone get confirmation that the form got sent? There is no feedback from pressing "submit" for me.
Same
Same issue
If you are pre-idea today, does OpenAI believe your startup will still be relevant in the face of the AGI progress they forecast to make in the time it takes you to ship?
I ask questions like that in my head all the time. My metric: once their AI is smart enough to make their website not throw an error half the time, I'll have to consider any AGI claims more deeply.
"pre-idea individuals"
Next up, we're funding prenatal individuals.
In 10 years, people will apply for jobs for their children before conception, and wisely not have kids if they can’t line one up (at least as a backup.)
Right, this corporate LinkedIn lingo is getting worse by the day.
Sell your first born to Scam Altman now!
I think what they are trying to do is a kind of forward-deployed-engineer program, but without the employment.
The more such engineers they train, the more profitable it is for them, because those people spread the word.
For the 1st cohort, they're probably going to accept extroverted people with an active social presence.
15 people in the first cohort? AKA don't bother applying.
It looks like application submission isn't functioning.
Yeah, clicking "Submit" doesn't do anything obvious, aside from posting some arcane errors to the JavaScript console.
lmao, was this vibe coded?
Do a hard refresh while the console is open; that should fix it!
Everyone at YC should be upset that sama continues to cannibalize the YC value proposition. First funding, then mindshare, and now this.
What exactly do I need to do to qualify?
I'm working on a prototype right now, guess I'll toss my hat in the ring.
Fortune favors the bold.
The non-working form is a shame :)
Why not ask the big bag of words to generate "ideas"?
Just playing Devil's advocate..
but what, exactly, makes you believe this internship program is not an idea generated by the big bag of words?
"it offers pre-idea individuals" wtf
If ideas are a dime a dozen, what even is a pre-idea startup?
Talented individual(s) who want to do a startup.
> "pre-idea individuals"
Move over "idea guys", it's the era of the "guy who hypothetically might have an idea at some point".
I've got concepts of an idea
I don't know man?
To me, it sounded like, "let's find all the idea guys who can't afford a tech founder. Then we'll see which ones have the best ideas, and move forward with those. As a bonus, we'll know exactly where we'd be able to acquihire a product manager for it!"
If OpenAI needs a bunch of PMs, they will increasingly be able to spin some up, not hire humans.
I caught that too. What's a "pre-idea" individual? Someone who... wants the vague _idea_ of a company?
No, before that
It's the AI guy version of the blockchain guy who had no idea what it was for or what to do with it, but was very hyped on it
I mean, I get it.
I'm highly capable of building some great things, but at my day job I'm filled to the brim with things to do and a never-ending list of tasks in front of me.
I've built cool stuff before, and if given a little push and some support could probably come up with something useful - and I can implement much of it myself.
Put me in the room with cool people, throw out some conversation starters, shake it up and I'll come up with something.
South Park Commons' "-1 to 0" program seems conceptually similar.
> pre-idea individuals
Holy crap, I thought that term existed purely in the realm of satire skits:
https://www.tiktok.com/@techroastshow/video/7341240131015445...
This smacks so much of a Silicon Valley episode. "Pre-idea individuals"… Sounds like they want people with no opinions. Next we'll be saying things like "no-thought personas".
I first misread it as "OpenAI Grave" where someone would put the list of all discontinued models.
The FAQ items don't expand for me, on Android Vivaldi.
Do you have to be in the US, or can they help you get in?
The country-selection menu seems to include countries from around the world. It sounds like only the first and last weeks are actually on-site; the rest is async/remote.
Whatever.
AWS gives startups money.
Looks like they want to build up and support middlemen to do the apps rather than doing them themselves, and take more of a platform or operating-system position. Which makes sense: giant corporations are reporting 95% of AI projects failing, and the core success cases are specialist companies tuning the platform to a specific problem. Then there are a ton of snake-oil AI apps that over-promise and under-deliver, hurting the image of AI's usefulness.
This is probably purely a pivot in market strategy toward profitability, meant to increase token usage and consumer/public trust, more than a way of farming ideas for internal projects.
> act more like a platform
As of 19 hours into the post, this is the only comment that explains what's actually behind this sort of program.
Precursor thinking from Altman (mentions YC): https://stratechery.com/2025/an-interview-with-openai-ceo-sa...
This is how it begins. You make sure you're under the hood of everything. Everyone is "building on" you. You see all the action.
While this can be how it ends: https://techcrunch.com/2023/01/19/twitter-officially-bans-th...
But not always. For an example that ended differently, Amazon opened to third party sellers, on the side, earlier than people might remember, 1999: https://www.cbsnews.com/news/amazoncom-in-a-bazaar-move/
And how that went: https://theconversation.com/amazon-is-no-longer-a-retail-sit...
This is how you put Multivac to work, and profit.
This is like creating filters for Instagram, but for AI. I am all for it. Let a million flowers bloom.
Is it just me seeing this as a talent discovery program?
It's clearly a talent grab. Where talent = creativity.
Most will submit the application with dime-a-dozen ideas. (Or, at internet scale, a dime a few hundred thousand, I guess?) No need to even consider those guys.
But it will be a pyramid. There will likely be 20-30 submissions that are at once truly novel and "why didn't I think of that!"-type ideas.
Finally, a handful of the submissions will be groundbreaking.
Et voilà. Right there you've identified the guys and gals thinking outside the LLM box about LLMs, or even AI in general.
hmm.. wonder what the most accurate Venn diagram for this is?
What would be nice is a "grove" I can flee to where I'd be immune to the effects of OpenAI and the other AI labs.
Alas, such a grove is impossible.
There are serious problems if they are lacking ideas while employing some of the supposedly best talent in the industry. Once your idea is out of the bag there is no way for you to control what happens with it.
If capitalists can't solve problems, who do you suggest can?
The internet and many adjacent technologies were all created and iterated on inside the DoD and other wings of government research.
The world really benefits from well funded institutions doing research and development. Medicine has also largely advanced due in part to this.
What’s lost is the recapture. I don’t think governments are typically the best candidate to bring a new technology to marketable applications, but I do think they should be able to force terms of licensure and royalties. Keeping both those costs predictable and flat across industry would drive even more innovation to market.
What happens instead is private entities take public research and capture it almost entirely in as few hands as possible.
In short, the loss of civic pride and shared responsibility to society has created the nickel-and-dime-you-to-death capitalism we are seeing rise today: externalize every cost possible and capture as much profit as possible, with no thought for second-order effects, or for how the very system they dodge contributing back to is what gave them the ability to take such gross advantage of it in the first place.
> The internet and many adjacent technologies were all created and iterated on inside the DoD and other wings of government research.
^ This is the secret sauce. For decades the arrangement was exactly that: defense projects would create new technologies, then once those were finished, they were handed to private industry to figure out how to make a $20,000 MIL-spec LCD screen cheap enough and in vast enough quantities that you can buy 3 of them for less than $1,000 while the manufacturer, distributor, and retailer make a solid profit each. That's not an easy thing to do and it's what corporations have historically been good at. And it makes things better for the defense industry too, because they can then apply those lessons to their own hardware where appropriate. Win/win.
But we don't fund research anymore, or at least not that sort of it. Or perhaps there's just not much else to find; I think it's a bit of both. In any case, nothing new is getting made, which is why technology feels so dull right now. The most innovative products are just thinner, dumber, lighter versions of things we already have, and that's not nothing, but it isn't very interesting either.
Labor, FOSS... can you not imagine anything besides wealthy people creating artificial scarcity to force others to work for them?
Edit: if you don't think this is true, look at the history of truly any country and see what happens when subsistence farmers and indigenous communities refuse to work for capitalists
Labor, FOSS: can you be more specific? All FOSS projects operate within capitalism. Do you think Linux would be as successful as it is without its UNIX roots, created by Bell Labs, a darling of capitalism, or without substantial contributions from companies like Intel?
Think of all the people who solved problems before/outside of typical capitalism. I guess more of those people wouldn't hurt to have right now to counter-balance the shift to hyper-capitalism that is ongoing.
Such as? Did any of those achievements lift billions of people out of poverty?
BRB, waiting for capitalists to solve the housing and healthcare crisis, shouldn't be long...
Capitalists would be over the moon if they could build more housing, I assure you.
I mean they already solved that, they're raking in even more billions. The only issue was their solution was for them, not us.
Incredible opportunity for SF Muni to get subsidized with even more full bus wrap ads for AI coding apps that nobody uses