The primary audience here isn't average people, or even engineers at the company. The primary audience is investors - the idea is to trick idiots into dumping their money into this overvalued company rather than the infinite other ones that also claim to solve groundbreaking problems with AI.
In the past, it was hiring (or at least the appearance of hiring). But that's an expensive signal and not sustainable for long in a post-ZIRP period. On the other hand, bullshitting about AI (and before that, blockchain) is much cheaper in comparison, and seems to work just as well without actually needing to hire anyone or even pretend to.
Whether AI is actually used, or helps the bottom-line, is irrelevant (it's not possible to conclusively say whether a piece of code was authored by AI or not, so the subsequent narrative will be tweaked as necessary to fit the market conditions at the time).
Meanwhile all of your technical employees discover you are an absolute clown of an executive and the entire middle management corps is feckless in the face of the slightest amount of pressure.
Yup, I realized Tobi is a massive dum dum when he had Tobi.eth in his Twitter profile. Even Gary Tan had it, and as an aside, that is why I'm scared of YC's future. All of those folks were an easy blacklist for me on Twitter. They lack original thought and are sheep-like in their thinking.
> > We will add AI usage questions to our performance and peer review questionnaire
whatever happened to "never go full retard"?
talk about being out of touch.
I'm down to try and use AI to enhance processes and encourage people to do that, but making it part of performance reviews is just silly.
> We will add AI usage questions to our performance and peer review questionnaire
Not kidding, but I'm actually afraid people will check AI usage and start nagging us that:
> "You are slow because you don't use enough AI. Look at Tom (single 28 yo guy working every weekend on Adderall), he uses AI and he is fast. Gonna add a note to check up on your AI usage metrics in a month, hopefully it will improve".
Our company has Cursor, which I sometimes use, but: 1. for lots of tasks, the most precise language is the programming language, and 2. I don't love it and prefer other editors, so I go for in-browser search + AI instead.
If this letter were published by my CEO, I would either 1. ignore it, as CEOs are often out of touch when it comes to actual day-to-day work, or need to jump on the AI train to show how visionary they are even when they are aware of the limitations, or 2. start looking for a job, because honestly: today it's a letter like this; in 3 months, you get a negative performance review because you didn't send enough queries to Cursor.
> for lots of tasks, the most precise language is the programming language
This is my problem with AI, or "vibe coding" or whatever you want to call it.
We already have many languages for expressing what we want the computer to do, and I find that natural language is the most difficult one for accomplishing that goal.
Sure, LLMs might be able to whip up a fancy landing page and some basic API really quick, but when it comes to solving harder problems I find it incredibly frustrating and difficult to try and prompt in English, when the programming language I'm working in is already well suited to that task and is much more explicit in what I want.
Maybe I'm using it wrong, but it's way less cognitive overhead for me to just type `for x, y := range something {}` than it is to try and prompt "Iterate over this list of values...etc."
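For reference, the range loop above can be made concrete in a few lines of Go. This is a minimal sketch; the names `pairs` and `values` are illustrative, not from the thread:

```go
package main

import "fmt"

// pairs builds "index:value" strings using the range loop from the
// comment above -- the operation one would otherwise have to describe
// in English as "iterate over this list of values...".
func pairs(values []string) []string {
	var out []string
	for i, v := range values {
		out = append(out, fmt.Sprintf("%d:%s", i, v))
	}
	return out
}

func main() {
	fmt.Println(pairs([]string{"a", "b", "c"}))
}
```

The point of the comparison: the loop header already states the intent (index and value over a sequence) more precisely than the English sentence does.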
When I'm programming, I'm not thinking in English
I've found that the only prompts that actually work for generating code reliably are the ones where you already know exactly what code it will output -- where nearly every part of the "how" is precisely clear, it just needs to be compiled from human language into code. Writing this perfect prompt often turns out to be a waste of time. It's a classic specification problem. Human languages will let you under-specify a problem, but the result will not be pleasant.
> single 28 yo guy working every weekend on Adderall
is that better or worse than ketamine for weekend productivity
> Gonna add a note to check up on your AI usage metrics in a month
Oh, oh and I almost forgot, I'm also going to need you to go ahead and come in on Monday/Friday too. Okay. We lost some people this week so we need to play catch up. Thanks!
Tom is webscale
I stopped using AI completely but it also feels like 80% of startups now are building AI focused products. It's a huge red flag for me now that the company is just riding a trend instead of building a well thought out product. Maybe that's how startups always were but it feels so nakedly cynical now.
I'm also not a fan of how productivity expectations seem to be getting worse because people on the business side read that this makes programmers 150% more productive. They probably do write more code, but if the shelf life of that code is worse, and there's less knowledge in the organization about that code because the engineer was leaning on a stochastic tool, how much more are you going to spend maintaining and rewriting that software? It just seems like we're all super excited to make crappier software, faster.
I don't want to be rude, but it feels like this is written by ramblingwordsaladGPT.
This message should be 10 to 20x shorter, to the point, and clearly actionable. Instead it feels like we got the output of prompting "can you turn these few bullet points into a 3-season telenovela?"
He’s following his own advice
or, as he would say: Clearly the CEO is demonstrating advanced implementation of his own AI-first philosophy by leveraging transformer-based language models to exponentially increase word count while simultaneously decreasing information density—a masterclass in modern corporate communication techniques that validates his thesis about AI integration being not merely optional but fundamentally requisite in today's rapidly evolving digital landscape. The medium is indeed the message.
This is gold.
Hahaha This is one of my favorite hackernews comments
I used AI to read it. Seemed only fair.
> AI is now a fundamental expectation at Shopify: Effective use of AI is required for all employees, and it will be integrated into performance and peer reviews. AI is essential for accelerating learning, improving productivity, and advancing business outcomes.
> AI must be integrated into the GSD prototype phase: AI should dominate the prototype phase of projects, enabling faster creation and collaboration. Teams must demonstrate how they can use AI before requesting additional resources.
> Commitment to continuous learning and sharing: Shopify emphasizes self-directed learning of AI tools, sharing knowledge and experiences with the team, and fostering a culture of experimentation to stay ahead in the fast-evolving AI landscape.
It's not rambling or word salad. It just tries to include some justification and explicit directions.
I have to question the literacy levels of people these days if a page of text is too much to handle.
He's just very excited from all the hype, you know how CEOs are.
More likely an investor, and I’m sure the company reviews are “protected” by blockchains.
> all of us who did have been in absolute awe of the new capabilities and tools that AI can deliver to augment our skills, crafts, and fill in our gaps.
Am I fundamentally missing something about the experience upper management has with AI versus the experience of a mid/senior level developer in the trenches? I also have access to Cursor and a few other LLMs.
They're handy for bouncing ideas around and for some simple tasks, but I've hardly felt like it was a force multiplier.
When it comes to coding, the amount of times it's hallucinated a function that didn't exist or used a deprecated implementation has felt like a net neutral at best.
> We will add AI usage questions to our performance and peer review questionnaire.
> AI must be part of your GSD Prototype phase.
I can understand asking your devs and other employees to try out AI in their workflows in an attempt to get some productivity gains. Sounds like a responsible CEO trying to milk some value out of his existing headcount. But it sounds absolutely dystopian to tie performance metrics to AI use or to make the project planning process AI-centric.
I doubt every project fits that paradigm.
> Am I fundamentally missing something about the experience upper management has with AI
Read the post again. Does this look like something an LLM would struggle to write?
A lot of people in upper management spend an awful lot of time writing meaningless word salad. They take a single sentence like "20% is getting fired" and turn it into a four-page essay about "strategic realignment", "streamlining growth", and "enabling the future of the $corp family". It doesn't really mean anything, so there's no way for an LLM to get it wrong.
If you haven't used AI to do complicated tasks where the details actually matter, I'm not surprised you'd get the idea that it is the greatest thing since sliced bread.
You've hit the nail on the head imo. The reason business leadership is stoked about this stuff is because they don't actually know anything about how their product gets built.
> But, it sounds absolutely dystopian to tie performance metrics or changing the project planning process to be AI-centric.
I don't think you can justify the use of "AI-centric" based on his letter. Maybe it's just a projection of fears? But nothing you quoted suggests 50% is more likely than 5%.
We joke about AI at work a lot but man if our CEO told me I had to start using it and that my performance would be judged on that - yeah I'm out. Why don't you let your developers decide what and how much AI they want or need to use.
It's interesting that you single out developers. Why? Certainly that doesn't seem to be Tobi's primary concern.
In the past, they would have just called this a hiring freeze. The CEO doesn't want to admit that the company isn't doing great, so the excuse is now "you must explain why the job can't be done by AI".
I mean, if the job can be done by AI, he should go the extra step and lay off half of the employees, that's the sensible thing to do for any CEO. Why not do that today?
This is pretty dumb. I am looking forward to the news when the CEO is ousted and AI can't even save him.
> I’ve seen many of these people approach implausible tasks, ones we wouldn’t even have chosen to tackle before, with reflexive and brilliant usage of AI to get 100X the work done.
Ah yes, the 100x engineer delusion. Why not a 1000x?
If the engineer is 10x, the AI is 10x, and they buy a laptop with 10x flops, install lights with 10x the lumens in their office, give them coffee ground 10x finer, they could all easily be 100,000x engineers.
you forgot to divide by 10,000 for putting them in an open office and making them do jira though
Reading that thread, it's depressing to see the number of people @'ing grok to summarize an article of maybe 3-4k words. I think it's disgusting that these same companies that have wrested our attention spans from us over the last 15 years are now pushing "companion AIs" to quicken the rate at which we consume content on their platforms.
It’s a recent trend on X, under almost every popular post you’ll find at least one person calling grok or Perplexity to explain.
I wonder how much of that is natural, and how much of that is Elon desperately trying to promote Grok by highlighting its use.
Exactly what our society needed, less reading and more tl;dr packaged and fed to us by tech giants.
The article was probably padded by an LLM to reach that 3/4k. Run it through an LLM summarization and hopefully you get back the original.
This can be considered classic now: https://marketoonist.com/wp-content/uploads/2023/03/230327.n...
Wow, Shopify has fallen so far; now it's just a trashy, dumb company, no longer a tech company doing elite things. I wish companies didn't become mid so soon. Tobi really needs to step down. It's a horrible look and tremendously bad for morale. And he seems to be feeling the pressure: when you have to expect people will spread something in bad faith, you really have some skeletons in your closet.
Was talking to my buddy who works there and they have a section in the performance review for how eagerly you're using AI. It honestly sounds like that organization is run by cargo cult clowns though.
The mandate has been set, anyone who isn't pasting LLM word salad into their performance review is wasting time and falling behind, as is anyone who directly reads the performance reviews which were submitted.
I strongly suspect this is now the case at most companies, and it's just a question of how much people admit it. I know multiple people who say they do it and a lot more who make plausibly deniable "jokes".
In a sane organization, if someone is producing, there is no reason to care whether they used LLMs constantly, not at all, somewhere in the middle, or only when it's a full moon on a Tuesday.
If managers are trying to push LLMs in order to squeeze more productivity out of developers and potentially replace them, why aren't developers just returning the favour?
Surely, the work output of most managers, which contains many more natural language artifacts than the typical coders' outputs, should be a better fit for an LLM than code.
>"Before asking for more Headcount and resources, teams must demonstrate why they cannot get what they want done using AI. What would this area look like if autonomous AI agents were already part of the team? This question can lead to really fun discussions and projects."
And there it is. None of us will be fired or laid off "because AI". The next person simply won't be hired. And then one day, that next person will be you or me.
Really, this is a direct, perfect analogy to the industrial revolution. One small person operating a steam shovel could all of a sudden do the work (faster, with higher quality) that for all of human history had required hundreds of strong men, and it changed the entire world practically overnight.
Businesses simply do not need the mass droves of SWEs who exist to type out CRUD code anymore. We will go the same way as the manufacturing workers of the 20th century, where the small percentage who could deeply specialize, adapt, and master the new processes were OK, but the vast majority never recovered and ended up in minimum-wage misery.
[dupe] https://news.ycombinator.com/item?id=43611926