Saturday, March 22, 2008
Severin asked if I thought OO had fulfilled its promise.
Well, that's a question that is almost by its nature impossible to answer.
Lots of colleges teach object-oriented programming right from the start, and lots of programmers adopted that style once they saw it becoming popular, so "widely adopted", sure.
For "fulfilled its promise", what exactly is the promise? If we think of people 100 years ago chopping down trees with axes and handsaws, and then the chainsaw came along, obviously a lot more people cut down a lot more trees today. But the professionals have in many cases cut down all the trees, which turned out not to be such a great idea. And the amateurs prune the trees in their back yards, but sometimes they cut off their own fingers as well.
When I run through this analogy in my mind, I can't help thinking that in the world of computing, our "chainsaw" -- whatever is the latest programming or implementation fad -- is often marketed both as a children's toy and as a personal grooming aid. :-)
OO style is a natural outgrowth of reaching the limits of other styles. For example, you might be doing functional programming and find yourself writing slight variations of the same code over and over. Or, in the Oracle context, you might be writing procedural PL/SQL and find yourself wishing you could plug in a variable at a spot where one isn't allowed, or writing the same code twice to deal with variables of different types.
But when I see people tackling straightforward problems requiring small amounts of code, sometimes I feel like it's overkill. I've seen plenty of code where someone contrived a hierarchy where one wasn't really needed, or where a lot of team effort went into making the hierarchy deeper, rather than coding the lowest-level classes that would actually do something useful.
Think of the common OO idiom of hiding all member variables behind getXYZ() and setXYZ() methods. If your project is going to employ tools that generate and compile source code dynamically, or a debugger that's going to hook in its own get and set methods ahead of the real ones, that technique makes perfect sense. It's enabling extra functionality, it's planning ahead to avoid problems in scalability and maintenance. But many programs are written to solve some limited problem, and the code is never going to be reused on such a scale, in which case the extra typing might not serve any purpose.
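To make that concrete, here's a minimal sketch of that idiom in Java; the Account class and balance field are made up purely for illustration:

public class Account {
    // The field itself is hidden from callers...
    private double balance;

    // ...and every read or write goes through an accessor method,
    // which is where generated code or a debugging hook could step in.
    public double getBalance() {
        return balance;
    }

    public void setBalance(double balance) {
        this.balance = balance;
    }
}

For a small, single-purpose program, that's a fair amount of ceremony for what a plain public field would have handled.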
I like the way UC Berkeley does it in their introductory CS course (link goes to UCB course podcasts on iTunes). They run through various styles of programming, and only when they've demonstrated some of the limitations that OO is intended to solve do they introduce OO style. At that point, the assignments involve actually writing the guts of an OO system, so even if you're just running trivial OO code you're seeing how it all works under the covers.
Tuesday, March 11, 2008
Can Your Programming Language Do This... Or That...
Just wanted to point out this "Joel On Software" article, Can Your Programming Language Do This? It's a nice, concise opinion piece that summarizes why, given an arbitrary algorithmic problem, I'm personally more likely to turn to JavaScript (or Perl, or another scripting language with the same feel) than to Java.
The key for me with Java is this statement from Joel:
Java required you to create a whole object with a single method called a functor if you wanted to treat a function like a first class object. Combine that with the fact that many OO languages want you to create a whole file for each class, and it gets really klunky fast.
I think of Java's OO nature as "hard" object orientation, in that choices are relatively set in stone based on how class files are organized and how the code flows. You can make variants of a class that do whatever you want, but you might have to do a lot of advance planning.
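To see what that heaviness looks like in practice, here's a rough sketch of Joel's "whole object with a single method"; the Transformer interface and the other names are invented just for this example:

// A single-method interface, so that a "function" can travel around as an object.
interface Transformer {
    String transform(String s);
}

public class FunctorDemo {
    // Anything that wants to accept a function has to accept the interface instead.
    static String applyTwice(Transformer t, String s) {
        return t.transform(t.transform(s));
    }

    public static void main(String[] args) {
        // Passing "a function" means spelling out a whole anonymous class.
        Transformer upperCase = new Transformer() {
            public String transform(String s) {
                return s.toUpperCase();
            }
        };
        System.out.println(applyTwice(upperCase, "abc"));   // prints ABC
    }
}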
Conversely, in Perl, you can do this:

sub something
{
    # ...one set of code...
    print "first definition\n";
}

sub something
{
    # The definition parsed last quietly replaces the earlier one.
    # ...another set of code...
    print "second definition\n";
}

something();    # prints "second definition"
You can call something(), and it picks up the last definition, which makes it very easy to change the behavior of a program by overriding a method right before you call it, or by swapping the order of 'require' lines, or what have you. I think of this as "soft" object orientation, because it may be less amenable to mathematical proof of correctness, but it requires less planning and is less likely to produce cascades of syntax errors. Define it, call it, redefine it, call the new thing.
Most languages have some sort of dynamic execution facility; in PL/SQL, for example, it's the EXECUTE IMMEDIATE statement and the OPEN-FOR statement. Any time you see a language concatenating strings and executing the result, you need to watch out for potential security problems. Who gets to supply that string? Could it be rigged to execute multiple statements instead of one? Could different names be substituted so that it operates on some object you didn't expect? PL/SQL has the USING clause to guard against such problems with bind variables. Still, the JavaScript approach of passing functions around as parameters feels comfortable, because all the variations of those functions are defined in your own source code rather than assembled from arbitrary strings.
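For the record, the same bind-value idea shows up on the Java side as JDBC prepared statements. Here's a rough sketch; the employees table and its columns are assumed just for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BindDemo {
    // Risky: building the statement by concatenation, e.g.
    //   "SELECT salary FROM employees WHERE last_name = '" + name + "'"
    // Safer: a ? placeholder with the value bound separately,
    // much like EXECUTE IMMEDIATE ... USING on the PL/SQL side.
    static void printSalaries(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT salary FROM employees WHERE last_name = ?");
        ps.setString(1, name);   // bound as a value, never parsed as SQL
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getDouble("salary"));
        }
    }
}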