(2024-06-19) I Will Fucking Piledrive You If You Mention AI Again

Nikhil Suresh: I Will Fucking Piledrive You If You Mention AI Again. I myself have formal training as a data scientist, going so far as to dominate a competitive machine learning event at one of Australia's top universities and writing a Master's thesis where I wrote all my own libraries from scratch.

I. But We Will Realize Untold Efficiencies With Machine L-

By 2021 I had realized that while the field was large, it was also largely fraudulent.

Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders.

You see, while hype is nice, it's only nice in small bursts for practitioners. We have a few key things that a grifter does not have, such as job stability, genuine friendships, and souls. What we do not have is the ability to trivially switch fields the moment the gold rush is over, due to the sad fact that we actually need to study things and build experience. Grifters, on the other hand, wield the omnitool that they self-aggrandizingly call 'politics'. That is to say, it turns out that the core competency of smiling and promising people things that you can't actually deliver is highly transferable. (soft skills)

The data science jobs began to evaporate, and the hype cycle moved on from all those AI initiatives which failed to make any progress, and started to inch towards data engineering.

At least, I thought, all that AI stuff was finally done, and we might move on to actually getting something accomplished.
And then some absolute son of a bitch created ChatGPT, and now look at us.

II. But We Need AI To Remain Comp-

Unless you are one of a tiny handful of businesses who know exactly what they're going to use AI for, you do not need AI for anything.

Artificial intelligence, as it exists and is useful now, is probably already baked into your business's software supply chain.

Consider the fact that most companies are unable to successfully develop and deploy the simplest of CRUD applications on time and under budget.
This is a solved problem - with smart people who can collaborate and provide reasonable requirements.

We do this crazy thing where we solve problems together. I may not know anything about the nuance of building analytics systems for drug rehabilitation research, but through the power of talking to each other like adults, we somehow solve problems.

But most companies can't do this, because they are operationally and culturally crippled. The median stay for an engineer will be something between one to two years, so the organization suffers from institutional retrograde amnesia.

Whenever there is a ransomware attack, it is revealed with clockwork precision that no one has tested the backups for six months and half the legacy systems cannot be resuscitated - something that I have personally seen twice in four fucking years. Do you know how insane that is?

Most organizations cannot ship the most basic applications imaginable with any consistency, and you're out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your IT department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup?

How about you remain competitive by fixing your shit?

III. We've Already Seen Extensive Gains From-

Yesterday, I was shown Scale's "2024 AI Readiness Report", which has a chart in it on how companies' AI initiatives have fared.

How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can't manage this consistently with CRUD apps and people think that this number isn't laughable? Some companies have seen benefits during the LLM craze, but not 92% of them.

A friend of mine was invited by a FAANG organization to visit the U.S. a few years ago. Many of the talks were technical demos of impressive artificial intelligence products. Being a software engineer, he got to spend a little bit of time backstage with the developers, whereupon they revealed that most of the demos were faked.

Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India.

IV. But We Must Prepare For The Future Of -

I see executive after executive discuss how they need to immediately roll out generative AI in order to prepare the organization for the future of work.

I am not in the equally unserious camp that thinks that generative AI does not have the potential to drastically change the world. It clearly does.

It seems that we are heading in one of three directions.

The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we're all harvested for our constituent atoms. (AGI)

However, defending the planet is a whole other thing, and I am not even convinced it is possible. In any case, you will be surprised to note that I am not tremendously concerned with the company's bottom line in this scenario, so we won't pay it any more attention.

A second outcome is that it turns out that the current approach does not scale in the way that we would hope, for myriad reasons... In this universe, some industries will be heavily disrupted, such as customer support.

In the case that the technology continues to make incremental gains like this, your company does not need generative AI for the sake of it.
If you don't have a use case, then having this sort of broad capability is not actually very useful.

The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant.

The final outcome is that these fundamental issues are addressed, and we end up with something that actually can do things like replace programming as we know it today.
In the case that generative AI goes on some rocketship trajectory, building random chatbots will not prepare you for the future. Is that clear now? Having your team type in import openai does not mean that you are at the cutting-edge of artificial intelligence.

Teaching your staff that they can get ChatGPT to write emails to stakeholders is not going to allow the business to survive this.
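To be concrete about how little "import openai" buys you: the sketch below is roughly all that a "ChatGPT writes your stakeholder emails" feature amounts to. It is a thin wrapper over someone else's hosted model, i.e. ordinary integration work, not cutting-edge research. (The model name, prompt, and the draft_email helper are illustrative placeholders, not anything from the original post.)

    # A trivial "AI feature": a thin wrapper around a hosted LLM API.
    # This is the kind of thing "import openai" gets you; it is integration
    # work, not novel artificial intelligence research.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def draft_email(notes: str) -> str:
        """Turn rough notes into a polite stakeholder email via a hosted LLM."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "You write short, polite business emails."},
                {"role": "user", "content": notes},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(draft_email("Project slipped two weeks; we need more budget."))
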

You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense.

V. But Everyone Says They're Usi-

There are those that have drunk the kool-aid. There are those that have not. And then there are those that are trying to mix up as much kool-aid as possible. I shall let you decide who sits in which basket.

VI.

The crux of my raging hatred is not that I hate LLMs or the generative AI craze... No, what I hate is the people who have latched onto it, like so many trailing leeches, bloated with blood and wriggling blindly.

They (the hype machine) know exactly what their target market is - people who have been given power over other people's money because they've learned how to smile at everything, and who know that you can print money by hitching yourself to the next speculative bandwagon.

My consultancy has three pretty good data scientists - in fact, two of them could probably reasonably claim to be amongst the best in the country outside of groups doing experimental research, though they'd be too humble to say this. Despite this we don't sell AI services of any sort. The market is so distorted that it's almost as bad as dabbling in the crypto space.

This entire class of person is, to put it simply, abhorrent to right-thinking people. They're an embarrassment to people that are actually making advances in the field, a disgrace to people that know how to sensibly use technology to improve the world, and are also a bunch of tedious know-nothing bastards that should be thrown into Thought Leader Jail until they've learned their lesson, a prison I'm fundraising for. Every morning, a figure in a dark hood, whose voice rasps like the etching of a tombstone, spends sixty minutes giving a TEDx talk to the jailed managers about how the institution is revolutionizing corporal punishment, and then reveals that the innovation is, as it has been every day, kicking you in the stomach very hard.

