Severin asked if I thought OO had fulfilled its promise.
Well, that's a question that's almost impossible to answer by its very nature.
Lots of colleges teach object-oriented programming right from the start, and lots of programmers adopted the style once they saw it becoming popular, so "widely adopted"? Sure.
For "fulfilled its promise", what exactly is the promise? If we think of people 100 years ago chopping down trees with axes and handsaws, and then the chainsaw came along, obviously a lot more people cut down a lot more trees today. But the professionals have in many cases cut down all the trees, which turned out not to be such a great idea. And the amateurs prune the trees in their back yards, but sometimes they cut off their own fingers as well.
When I run through this analogy in my mind, I can't help thinking that in the world of computing, our "chainsaw" -- whatever is the latest programming or implementation fad -- is often marketed both as a children's toy and as a personal grooming aid. :-)
OO style is a natural outgrowth when you're reaching the limits of other styles: when you're doing functional programming and find yourself writing slight variations of the same code over and over, or, in the Oracle context, when you're writing procedural PL/SQL and wish you could plug in a variable at a spot where one isn't allowed, or you're writing the same code twice to deal with variables of different types.
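To make that last point concrete, here's a toy Python sketch of my own (not anything from a real project): the same computation written once per type in procedural style, then once through a shared interface so the calling code no longer cares which type it has.

```python
# Procedural style: a near-duplicate function for each type.
def area_of_circle(radius):
    return 3.14159 * radius * radius

def area_of_square(side):
    return side * side

# OO style: one interface, each class supplies its own variation.
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius * self.radius

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

# Calling code works on any shape -- no per-type branching,
# and adding a Triangle class later means changing nothing here.
def total_area(shapes):
    return sum(s.area() for s in shapes)
```

The payoff isn't the class syntax itself; it's that the per-type branching disappears from every caller at once.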
But when I see people tackling straightforward problems requiring small amounts of code, sometimes I feel like it's overkill. I've seen plenty of code where someone contrived a hierarchy where one wasn't really needed, or where a lot of team effort went into making the hierarchy deeper, rather than coding the lowest-level classes that would actually do something useful.
Think of the common OO idiom of hiding all member variables behind getXYZ() and setXYZ() methods. If your project is going to employ tools that generate and compile source code dynamically, or a debugger that's going to hook in its own get and set methods ahead of the real ones, that technique makes perfect sense. It's enabling extra functionality, it's planning ahead to avoid problems in scalability and maintenance. But many programs are written to solve some limited problem, and the code is never going to be reused on such a scale, in which case the extra typing might not serve any purpose.
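As an illustration (my own hypothetical sketch, not code from any project mentioned above): in Python the boilerplate accessors are especially hard to justify, because a plain attribute can grow a get/set hook later, behind the same name, without touching any caller.

```python
# The idiom under discussion: explicit accessors for a private field.
class AccountVerbose:
    def __init__(self):
        self._balance = 0

    def get_balance(self):
        return self._balance

    def set_balance(self, value):
        self._balance = value

# The plain version: no ceremony, same information.
class Account:
    def __init__(self):
        self.balance = 0

# Later, if the "planning ahead" case actually arrives, a hook can be
# slid in behind the same attribute name -- callers are unchanged.
class AuditedAccount(Account):
    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        self._balance = value
        print("balance set to", value)  # stand-in for logging/validation
```

In languages without that escape hatch (Java, say), the accessor pair is the only way to reserve the hook point, which is exactly why the idiom makes sense there when the scale justifies it and is just extra typing when it doesn't.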
I like the way UC Berkeley does it in their introductory CS course (link goes to UCB course podcasts on iTunes). They run through various styles of programming, and only when they've demonstrated some of the limitations that OO is intended to solve do they introduce OO style. At that point, the assignments involve actually writing the guts of an OO system, so even if you're just running trivial OO code you're seeing how it all works under the covers.