The dead horse theory - AI edition
The tribal wisdom of the Indians, passed on from generation to generation, says: "When you discover that you
are riding a dead horse, the best strategy is to dismount."
However, in modern AI development, far more advanced strategies are often employed, such as:
- Discover that specific prompts work better, so encourage ever more prompt formalisms (ignore any accidental
  similarity to random existing things such as programming languages, minus e.g. the precision)
- Redefine any data - licensed or not - to be fair use
- Pipeline prompts and results recursively until desired output is reached
- Since consistency is unfortunately fundamentally impossible, turn over every stone in search of something to
  blame except the technology itself
- Claim to be super-close to AGI and take money for the final step
- Re-classify what AGI means
- Re-classify your company as having achieved AGI
- Claim you have AGI and take money _not_ to release it
- Keep bundling different models together - the more the merrier!
- Build data centres to provide enough compute for your models and their now infinite prompt loops
- Build power stations to power your totally reasonably power-hungry data centres
- Open up your models to control other software on your users' hardware, regardless of their inherent
  reliability. What could possibly go wrong?
- Fire all your employees in favour of AI
- Seed / invest more money
- Rehire, as your previous strategy of firing in favour of AI didn't (yet) pan out. You're just too far ahead
  of the curve!
- Implement AI-features everywhere - you know they want it!
- Seed / invest even more money
- Buy an open source browser fork
- Buy an open source editor fork
- Buy a competitor
- If the outcome is bad, then the prompt must be wrong. Recommend changing the prompt until it works (repeat
  the holy mantra: it can't be _that_ stupid)
(Inspired and/or blatantly plagiarized from theguardian.com, the oldest version found so far.)