I work for a company that provides betting to consumers in Australia…
In Australia, the three biggest days for betting are typically:
- Melbourne Cup (horse race)
- Caulfield Cup (horse race)
- AFL grand final (sports)
Yesterday was the Caulfield Cup and we breezed through it…always a nice result. Because if we don't, we lose lots of money…fast.
But the reason for the blog post is not to gloat (well…not entirely :-)). It's about an important point about apps under Oracle – here's my contention:
All that matters is your app.
End of story.
People talk about servers, CPU, flash, disk, bandwidth, memory speed, etc etc etc etc….but all those count for nought if your app is garbage.
To be honest, two years before we deployed our app into production…it was exactly that: garbage.
Transactions per second were measured in single digits before "something" (maybe the database, maybe the app server, maybe…well…anything) would either max out or be serialised on some resource.
And the tragic thing is – as I said above – the resolution discussion was always about: servers, CPU, flash, upgrades, blah blah blah…
Now that *might* be valid, if your crap app is (a) crap, and (b) will always be crap. But so many crap apps do not have to be crap…they can be changed.
Our app now handles about 1000 transactions per second – those same transactions that killed us at 10 transactions per second a few years back. That’s about double what it needs to be able to do.
So…did we purchase new servers? Nope. A new network? Nope. In fact, it's the same old hardware we always had…we just FIXED the app. And we didn't crank up the complexity (e.g. caching in the middle tier, de-normalising, etc), we just did common-sense stuff: make crappy SQL smarter, make smarter calls to the database, steer away from code reuse where it didn't make sense (e.g. "Oh…I've got a routine that looks up one account, I can re-use that to look up 1000 accounts…" Er, no…you can't).
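To make that last point concrete, here's a minimal sketch of the "re-use the single lookup" anti-pattern versus a set-based call. It uses SQLite in Python purely as a stand-in for Oracle, and the `accounts` table and its columns are invented for illustration – this is not our actual code:

```python
import sqlite3

# Hypothetical schema, invented for this example -- not the real system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, i * 10.0) for i in range(1, 501)])

def lookup_one_account(acct_id):
    # The reusable single-row routine -- perfectly fine for ONE account.
    return conn.execute("SELECT id, balance FROM accounts WHERE id = ?",
                        (acct_id,)).fetchone()

def lookup_many_naive(ids):
    # The anti-pattern: 500 separate calls by "re-using" the single lookup.
    return [lookup_one_account(i) for i in ids]

def lookup_many_set_based(ids):
    # The fix: one call that asks the database for the whole set at once.
    placeholders = ",".join("?" * len(ids))
    return conn.execute(
        f"SELECT id, balance FROM accounts WHERE id IN ({placeholders}) "
        "ORDER BY id", ids).fetchall()

ids = list(range(1, 501))
# Same answer either way -- but the naive version pays per-call overhead
# (and over a network to a real database, per-round-trip latency) 500 times.
assert lookup_many_naive(ids) == lookup_many_set_based(ids)
```

Both versions return identical rows; the difference is that the set-based one makes a single call, which is exactly the kind of common-sense change that moved us from 10 to 1000 transactions per second.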
Similarly, we’ve got strategies for the future that we think could get that up to around 2000 per second. But they are just strategies – no need to tune just for the sake of tuning. But we’ve got that insurance policy as we move forward. None of them involve hardware…they involve re-thinking the way we provide required business functions.
On top of that…we DO have hardware plans, because our current hardware has done such a good job that it's almost out of support, and we can't get spares.
But…it's all about your app.
Recently, we asked the suppliers of one of our apps to provide us with a script to remove 1 million LOB objects that their app's logging had created in one of our databases.
They sent us a SQL script that deleted one (1!) row, for "testing". Once we ran that and proved to them it was really, really gone – after a commit – they sent us another script that called the first one 1 million times, with a parameter (the PK) to determine which row to remove…
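For anyone who hasn't seen this pattern in the wild, here's a sketch of the two approaches – again using SQLite in Python as a stand-in, with an invented `app_log` table, not the vendor's actual script:

```python
import sqlite3

# Invented stand-in for the vendor's logging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_log (pk INTEGER PRIMARY KEY, payload BLOB)")
conn.executemany("INSERT INTO app_log VALUES (?, ?)",
                 [(i, b"x") for i in range(10_000)])

def delete_one_row(pk):
    # What the vendor shipped: delete a single row by primary key,
    # commit, repeat... so 1 million rows means 1 million statements
    # and 1 million commits.
    conn.execute("DELETE FROM app_log WHERE pk = ?", (pk,))
    conn.commit()

def delete_all_set_based():
    # What the script should have been: one statement, one commit,
    # and let the database do the work as a set.
    conn.execute("DELETE FROM app_log")
    conn.commit()

delete_all_set_based()
count = conn.execute("SELECT COUNT(*) FROM app_log").fetchone()[0]
```

Same end state, but the row-at-a-time version multiplies every fixed per-statement and per-commit cost by a million.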
Ah yes: did I fail to mention they are java j2ee duhvelopers, who of course claim their code runs a “lot faster in db2/postgres/mysql”?
What else could they be? When will this kind be E.R.A.D.I.C.A.T.E.D from IT, once and for all?
That’s a good point which is often overlooked or ignored, as the apparent expense of changing and testing the application is seen as greater than buying new hardware!
We have improved our throughput dramatically with just two modifications: 1) a move from a 32-bit to a 64-bit OS, and 2) changing the application. We’ve turned an 85% average load with regular 100% peaks into an average 5% load with 12% peaks on the same hardware installation. Like yourself, we also have more possible changes to the app to aid both performance and migration to Oracle 11g (we’re so behind the curve!).
Our hardware refresh is in the pipeline too, just due to component support issues!