Thursday, October 15, 2009

Deconstructing "Everything is UNIX"

From Linux magazine, an article by Jeremy Zawodny: Everything is UNIX.

For me, this is an example of the "Miller meme" from Repo Man. "Suppose you're thinkin' about a plate o' shrimp. Suddenly someone'll say, like, "plate," or "shrimp," or "plate o' shrimp" out of the blue, no explanation." You go through life thinking you'll find something better than UNIX. The man pages still have the same bad examples and incomplete option descriptions as in 1984. The window systems aren't up to snuff for desktop use. People are still finding performance bottlenecks due to system architecture, whichever architecture your favourite UNIX flavour uses.

And then you realize, there really is not much more. Everywhere you look, the same point is reinforced over and over. Windows Vista comes along, OS X is built on a UNIX base, and you get past your long-standing resentment about X11. You read the documentation for the latest and greatest web framework, and you think to yourself, this isn't any more usable than those thrown-together man pages. Even though no system incorporates every possible performance boost, the systems taking advantage of the latest and greatest hardware are often descendants of UNIX; and if you're looking for mainframe-style reliability, again you're probably tinkering around in the UNIX space.

I find it interesting that UC Berkeley interrupts their Scheme-based intro to programming languages with an interlude to consider UNIX shell scripting. If the problem can be solved by combining a set of well-understood and reliable steps, don't reinvent the wheel. That's a good lesson for people developing frameworks and class libraries.

In the sample Google interview questions floating around the web, one recurring theme I've noticed is how to solve a fairly large-scale problem involving string and file manipulation. To which the obvious answer, IMHO, is to use the basic UNIX commands up to and including simple Perl scripts. For something quick and general-purpose, you're not going to implement a better string-finding algorithm than grep or a better simple sort than sort. Or a better I/O pipeline than UNIX pipes, or better filesystem traversal than find, or a better use of a huge memory space than a simple Perl hash. If you're writing complicated custom code, you'd better be solving harder problems than those.
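To make that concrete, here's the kind of pipeline I have in mind: the classic "top N most frequent words" interview problem, solved end to end with standard tools. (The file name and sample data are made up purely for illustration.)

```shell
# Hypothetical input file, created here just for illustration.
printf 'apple banana apple cherry apple banana\n' > /tmp/words.txt

# Top three most frequent words, with no custom string code:
# split into one word per line, group, count, rank, take the top.
tr ' ' '\n' < /tmp/words.txt | sort | uniq -c | sort -rn | head -3
```

Each stage is a well-tested program doing one thing; the only "design" work is choosing the order to chain them in.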

(Hat tip to Tim Bray for the article.)

Saturday, August 8, 2009

The Humble PL/SQL Dot

Like many other languages, PL/SQL has its own "dot notation". If we assume that most people can intuit or easily look up things like the syntax for IF/THEN/ELSIF, that means that first-time users might quickly run into dots and want to understand their significance.

The authoritative docs on the dots are in the Oracle Database 11g PL/SQL Language Reference, in particular Appendix B, How PL/SQL Resolves Identifier Names. As we can see from these index entries, the subject is mentioned here and there throughout the manual:

dot notation, 1.2.5.2, B.2
for collection methods, 5.10
for global variables, 4.3.8.3
for package contents, 10.5

When I was in charge of the PL/SQL docs, I rewrote that Appendix B to try and make it more helpful, to give more examples and state what kinds of problems you could avoid by knowing this information.

Today, as a PL/SQL programmer, I would go even farther in simplifying the conceptual information in plain English, and positioning the knowledge as important for troubleshooting. Something like...

Names that use PL/SQL dot notation can have many different meanings, such as a procedure inside a package, a column inside a table, or an object owned by another schema. In code that you write or inherit, you might use one of these idioms extensively, which can make for a nasty surprise when your code stops working because someone creates a new table, schema, package, etc. with the same name as one of your dotted names. So, read that Appendix B to understand all the variations and the precedence rules.

Here are some additional tips I'll pass on, about where you can add or remove dots to get out of trouble:

If you are coming from Perl, where you use an expression like string1 . string2 to concatenate, the corresponding PL/SQL expression is string1 || string2.

When you want to refer to two items in different scopes that happen to have the same name, use dot notation for one of the references. For example, if your PL/SQL procedure accepts a parameter ID and then queries a table that has a column ID, the query won't work properly when you use a WHERE clause like WHERE id = id. Instead, write it as WHERE id = procedure_name.id.
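A quick sketch of that pitfall (the table, column, and procedure names here are hypothetical):

```sql
-- Hypothetical sketch: the parameter "id" collides with the column "id".
create or replace procedure count_matches (id number) as
  n number;
begin
  -- Broken: inside the query, both occurrences of "id" resolve to the
  -- column, so the condition is true for every row with a non-null id.
  select count(*) into n from employees where id = id;

  -- Fixed: qualify the parameter with the enclosing procedure's name.
  select count(*) into n from employees where id = count_matches.id;
end;
/
```

The broken version compiles and runs without complaint, which is exactly what makes it nasty; only the row counts give it away.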

If you are writing code for use with the PL/SQL web toolkit, you'll find the URLs are simpler and cleaner if you can keep dots out of them. That means using standalone procedures where you might normally use a packaged procedure, because the package notation would put the whole package.procedure name in the URL. You can get the best of both worlds by coding up the package as usual, then making a standalone procedure that simply calls the packaged procedure.
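That wrapper takes only a few lines (package and procedure names here are hypothetical):

```sql
-- Hypothetical sketch: keep the real logic in the package, but expose a
-- standalone wrapper so the URL reads .../show_page, not .../my_pkg.show_page.
create or replace procedure show_page as
begin
  my_pkg.show_page;  -- forward straight to the packaged procedure
end;
/
```

All the maintainability of the package stays intact; the wrapper exists only to give the toolkit a dot-free name to put in the URL.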

Thursday, July 9, 2009

vi, Still Relevant

I thought this was a good summary of why vi (or more accurately vim) is still a good choice for editing today:

Why, oh WHY, do those #?@! nutheads use vi?

One trick I learned from this article that I hadn't known: keep the cursor on the same line, but position that line at the top, middle, or bottom of the screen via 'zt', 'zz', and 'zb' respectively. I am always ending up with the cursor at the bottom of the screen while running macros, and I want to look ahead to the next N lines, but all the other movement commands like Ctrl-d, H/M/L, etc. actually move the cursor. zt is a fast way to tell how many more times you'll want to run the same macro, for macros that process the current line and then move down one.

Sunday, June 21, 2009

The Humble PL/SQL Exception (Part 1a) - The Structure of Stored Subprograms

As I said in my previous post, The Humble PL/SQL Exception (Part 1) - The Disappearing RETURN, there are a lot of nuances surrounding exception handling. That post attracted some comments that I thought deserved a followup post rather than just another comment in response.

oraclenerd said (excerpted):

I'm going to have to disagree with you on the internal procedure (in the declaration section) paradigm. What about testing?
...
Now you have 2000 lines in your declaration section and 400 or so in the body. (I've seen it...really, I've seen it). Then you want to change one of those internal procedures...you can't test it without testing the entire thing.

This is to me one of the conundrums with any programming language; PL/SQL just has its own unique aspects.

Any set of procedures can be a testing challenge, since you can think of calling a procedure as saying "do this", whereas you can think of calling a function as asking "tell me what would happen if you did this". Sure, it's easier to test a function, because you just have an in-memory return value that you can compare against some expected value, and you don't have to worry about changing data by accident or doing rollbacks.

In the situations where I would use the technique of lifting code directly into internal procedures, the code is typically very short and easily verifiable: select count(*) into... followed by an IF test; a sequence of simple assignments that would be cumbersome to turn into one function per assignment; blocks of code that have already been verified in toto by virtue of tests run against the outermost procedure or function; that sort of thing.

After the code is modularized this way:

  • The outermost procedure can often be improved and/or optimized simply by reordering the inner procedure calls. For example, to do all the "should I quit early" tests before doing any substantial work. Or to put "set up complicated string value" right next to "use that string value".

  • Debugging steps such as removing blocks of code can be performed by commenting out single lines. (Much appreciated if those blocks of code themselves contain multi-line /* */ comment blocks.)

  • If a logic problem does turn up in the procedure, it's easier to figure out the part to look at if it has its own descriptive name. And a tool like ctags makes it handy to jump directly to that inner procedure.


The problem with a 2000-line declaration section, to me, is no worse than if you go the package route and must fix syntax errors or track down logic errors within a big package body. I use my favorite SQL*Plus hack to push that code off into separate files once it gets too big.

Anyway, my point is not to advocate using this kind of structure for every procedure, or even every big procedure, but rather to suggest that if you do restructure a procedure this way, using exceptions instead of return can make the work a little simpler -- which is not a bad starting point if you are trying to get your head around how the flow control works for exceptions.


Brian Tkatch said:

Personally, i have done this (with a PACKAGE though) by RETURNing a value from the PROCEDURE, and then checking it in the main code:

IF Some_check() = 1 THEN RETURN; END IF;

It's not as pretty, but i found it to workout nicely.

Sure, I'm a big fan of function-oriented design; I think it's been sadly overlooked in the rush to make everything object-oriented.

Restructuring the early tests into functions does take a little work. And it requires some design decisions -- use Boolean values, 0/1, or named constants? how to ensure all cases are handled, maybe use the case statement? OK, I made my function return Boolean values and tested the values inside a case statement; but I can't test the function values via SQL queries, and maybe case will throw a runtime exception if some unexpected value like null comes back. The sample code in Brian's comment doesn't suffer from those problems, but this is the kind of thing I could imagine a junior programmer coding too elaborately.
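For the record, the over-elaborate version I'm imagining would look something like this (the table name and the inner calls are hypothetical):

```sql
-- Hypothetical sketch of the Boolean-function variant described above.
declare
  function theres_data_to_process return boolean is
    n number;
  begin
    select count(*) into n from data_table;
    -- note: a BOOLEAN return value can't be tested from a plain SQL query
    return n > 0;
  end;
begin
  case theres_data_to_process()
    when true then do_the_normal_work;
    when false then null;
    -- if the function somehow returned null, neither branch would match
    -- and the runtime would raise CASE_NOT_FOUND (ORA-06592)
  end case;
end;
```

Nothing wrong with it as such; it just now carries design decisions (and failure modes) that the original straight-line code never had.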

That's not to say that the resulting code is any better or worse, just that it's now subject to potential new bugs and maintenance (have to document return values etc.) by introducing functions. That's the idea behind the use of exceptions in my previous post, to make the restructured code 100% the same as the original, not 99 44/100ths %.

Wednesday, June 17, 2009

The Humble PL/SQL Exception (Part 1) - The Disappearing RETURN

Exception handling in PL/SQL is a big subject, with a lot of nuances. Still, you have to start somewhere. Let's take one simple use case for exceptions, and see if it leads to some thoughts about best practices. (Hopefully, this is not the last post in this particular series.)

One common pattern I find in PL/SQL procedures is a series of tests early on...

if not_supposed_to_even_be_here() then
  return;
end if;

if no_data_to_process() then
  return;
end if;

if no_parameters_passed() then
  print_basic_page();
  return;
end if;
...

In real life, these tests tend to use hardcoded constants, queries, etc. that clutter up the procedure and make it hard to follow, rather than descriptive names as in the example above. One simple solution is to move the whole block into its own inner procedure:

check_if_supposed_to_be_here();
check_theres_data_to_process();
check_parameters_were_passed();

These procedures, declared with the procedure ... is ... syntax immediately before the begin of the main procedure, can access all the variables from the main procedure, so they typically don't require parameters. In most cases, you can just lift a block of confusing code from the main procedure, and turn it into an inner procedure with a descriptive name.

However, the return statement complicates things. When transplanted into an inner procedure, it loses its mojo. Instead of cutting short the entire procedure, it becomes essentially a no-op, because now it's at the end of a short inner procedure that was about to return anyway, back to the middle of the main procedure. The solution is to use an exception, which requires structuring the whole business like so:

create or replace procedure big_procedure as
  num_rows number;
  skip_normal_processing exception;
  procedure check_data_to_process is
  begin
    select count(*) into num_rows from data_table;
    if num_rows = 0 then
      raise skip_normal_processing;
    end if;
  end;
begin
  check_data_to_process();
  ...do all the normal stuff if there really is data to process...
exception
  when skip_normal_processing then null;
end;
/

Now if you detect some condition that means the procedure should bail out, it really will. If you can anticipate that your procedures might get lengthy enough to benefit from using inner procedures this way, you can plan ahead by using exceptions right from the start, instead of starting with return statements and then turning them into exceptions when you restructure the original straight-line procedure.

One thing that still bothers me is the way the control flow jumps around. The calls to the inner procedures jump backwards, and if any "stop! now!" conditions are triggered, control jumps forward all the way to the end of the main procedure. When I visualize such a structure, it reminds me just a little of spaghetti code. I know that on paper, all is as it should be -- all the reusable / modular code is separated out at the front, all the error handling and termination code is separated out at the end. I just would like to see more real-life cases where such structure saves on maintenance and debugging time, before passing final judgment.

Saturday, June 13, 2009

When Backwards Compatibility Goes Too Far

I couldn't help but notice this new article, about holdovers from the earliest days of DOS and even CP/M still showing up in Windows-based development:

Zombie Operating Systems and ASP.NET MVC

Personally, I really enjoyed working on the IBM C/C++ compiler back in the day, targeting Windows 95. They licensed the Borland resource editor and I adapted the RTF-format online help, with no RTF specs, just trial and error. For me, that was the apex of Windows-based technology. Once the web came along, everyone forgot how to write usable desktop apps.

Monday, June 1, 2009

Deconstructing the iPod Shuffle UI

The new buttonless iPod Shuffle, which moves all the controls onto the headphone cord, is taken to task in this article:

The new iPod shuffle: Button, button, who's got the button?

Now, I'm a recent purchaser of the previous Shuffle model, and intuitively I prefer the Play/Pause/Forward/Back/Up/Down controls of that previous model. But I like to take contrarian positions sometimes too, so let me see if I can defend the new Shuffle from a UI point of view.

One thing I notice with the older square Shuffle is that each time I clip it on, I need a brief mental orientation session. With this jacket, it's clipped on this side; with that shirt, it's clipped on the other side with the controls upside down and reversed; with a t-shirt, it's probably clipped on the bottom, and so the controls are 90 degrees from either of the previous orientations.

This takes a few seconds, which isn't a long time, but it's annoying if the goal is for the player to vanish from my consciousness. I quickly straighten out which button skips to the next track, but if I'm too distracted to visualize how the Volume Up/Down buttons are situated, I might find which is which by trial and error while driving. Plus I may clip it in a different orientation if I decide it's more important for this outing to turn it on/off, vs. having easy access to the audio jack.

Now, there are a few ways this could be addressed. Some sort of Braille-like bumps to make the buttons distinctive to the touch. A rotating control area -- probably wouldn't help without the bumps. An accelerometer to make the buttons swap functions when it's clipped up, down, or sideways. But all of those just drive the cost of parts up, and still take some getting used to.

Having the controls on the headphones, regardless of whether you like earbuds or not, does make the interaction identical however you clip on the actual player -- in the car, walking, working out, in your pocket, upside down during weightless space training, and so on. (Actually, could you even plug it in to a car stereo without some kind of extra adapter cord with the new controls?)

I think there's a much broader lesson to draw here about Apple's UI principles. Lately, it's introduced lots of things like the Shuffle, the Apple Remote, the single giant button trackpad. The theme here is "Fitts's Law". Broadly speaking, it means that it's easier to select something UI-wise if it's big and nearby. I like the AskTog description better than the Wikipedia article:

AskTog: First Principles of Interaction Design

You see this principle applied a lot in Apple's software UIs. Think about the magnification of icons in the dock as you mouse over them. The thing you might select is directly under the mouse, and it gets bigger so it's hard to miss. Or the software keyboard on the iPhone, where you tap a letter that's very small, yet the screen displays a magnified version of each letter as you type it, so you can slide your finger to adjust if you didn't hit quite the right spot.

Now, the principle is bleeding over to the hardware side of UIs. IBM had the little mouse nub for Thinkpads. Right under your fingers as you were typing, but small and hard to hit or control. The Apple trackpad that's all button, all the time, is a purer expression of Fitts's law. You could even say it embodies a third related principle, that in addition to making the item easy to select, you arrange the UI so that there's only one logical action for this one giant button to perform. So you're not clicking on different parts of the trackpad to do different things, it's always "select"; other gestures, such as double-clicking or the two-finger swipe, are physically different, but again work regardless of the location.

We see this new principle spreading in the world. For example, the video players you see on web sites where the initial screen is a single big Play button. We saw it a while back in hardware with that third-party gizmo that was a dial with a single button. These days when I use a second monitor for Photoshop, I realize it's part of the same trend; this monitor is really a UI control with a single selection -- its own On/Off button, to select when entering or leaving the mode where I need more screen space for menus and palettes.

But anyway, back to this new Shuffle. We've got the controls that are situated in the same location regardless of how the device is positioned and oriented. We've got a single action that the controls perform, with less-frequent variations that depend not on location but on a different gesture (double- or triple-clicking, press and hold). So, although I still like my el-cheapo headphones and cassette adapter for the car, the new Shuffle UI makes sense if considered part of a pattern.

Wednesday, January 14, 2009

Comic-Based Communication

These days, there are as many styles of documentation as there are of programming. Structured docs (waterfall model), topic-based writing (object-oriented development), less formal styles based around wikis (agile coding). Another one that I haven't seen given a name, is what I think of as comic-based communication.

If you grew up with comic books, fingers poised next to "continued on 3rd page", following the narrative jumping from panel to panel, then you probably don't have a problem understanding this style. Cinematic examples would be the first Hulk movie or the TV series 24, where the action sometimes splits into multiple frames that all run side by side for a few seconds.

I haven't found such presentation compelling in movies or TV. The origin story of comic-based communication is based around the printed page. So its heroic destiny probably lies with static images, either on paper or computer screen.

One buzz-worthy example is Google's overview of the Chrome browser. It was, ah, inked by well-known illustrator Scott McCloud. Scott's "Understanding Comics", "Reinventing Comics", and "Making Comics" are kind of the "Mythical Man Month" of the medium. (I could swear I read one of them, probably the last, all the way through on Scott's blog but I can't find it now.) Here's a talk from the 2005 TED conference with some history and examples. (Start at 7:43 to skip the biographical stuff.)

In the Google overview, we see a lot of principles that it's hard to do justice to in a blog post. There's Tuftean multi-dimensionality -- characters and dialog bubbles are positioned around or even interact with charts, symbols, and bits of screen imagery from the Chrome UI. The "speakers" aren't intimidating because they look like cartoon characters. Their rotoscoped look also means we can't pick holes in their appearance. If someone looks geeky or sloppily dressed, hey, that's just a mild caricature by the artist.

The overview touches on subjects that make software companies nervous to address in documentation -- things are slow, they crash, they're insecure -- but illustrates those ideas with witty, exaggerated graphics. What competitor is going to cry foul, what customer is going to gripe, that you're slighting someone else's product or exaggerating your own merits? It's supposed to have a tinge of absurdity after all. This style could be used for conceptual information where you're just trying to impress certain points on people, and on troubleshooting information where you can exaggerate things that go wrong and responses to problems. I don't know if it would work as well for task or reference information. The presentation employs the same mnemonic tricks that good students use intuitively, to hook important facts and relationships to memorable images.

Every communication style needs its authoring and presentation tools. (Word, Powerpoint, Framemaker, Acrobat, Wiki, Wordpress, Firefox, and so on.) For authoring in comic style, there's the application Comic Life, for both OS X and Windows. You can put together a PDF, web presentation, various kinds of images, or Quicktime movie. You can lay out pages with various panels familiar from the comic book days, and place thought or speech balloons, letter boxes, and stylized logo/title text.

The essence of each panel is an image, which can be dragged, scaled, cropped and rotated. This presentation is fascinating to me, because I've spent so much time on traditional photography. In a typical photographic presentation, you need to pick the best pictures that are perfect in every detail; but don't use too many, because they'll be viewed one at a time, and your audience will get bored if pictures are too similar or the transitions are too fast or slow.

With a comic-style presentation, you only need to find an interesting section of the picture with the same general shape as the panel. It can be a narrow sliver or an irregular shape. The rest of the picture (which in real life might be overexposed or blurry) is left to the reader's imagination. Page layouts let you present similar pictures in the form of a narrative, so no need to pick a single best one. Or you can float foreground pictures over a background image, either with a similar theme or a stark contrast. Text presented as speech balloons or captions in a letterbox carries a different tone than bullet points on a Powerpoint slide; again, you can exaggerate, understate, and leave out details for the reader's imagination to fill in.

The examples on the right come from a trip through the Grand Canyon and Bryce and Zion national parks in Utah. It's been more than a year and I'm not nearly finished even a first pass through the pictures to put together a traditional slideshow. But with the slideshow reimagined as a comic book, new perspectives and narrative possibilities jump out.

Friday, January 9, 2009

You've Got to Fight for Your Invoker's Rights

This post is about a PL/SQL feature that doesn't get enough respect, "invoker's rights".

First off, what's its real name? Depending on the source, you'll see the feature name spelled "invoker's rights", "invokers' rights", or "invoker rights". That makes a difference -- you'll get different results in Google depending on what combination of singular, plural, and possessive you use. And to be strictly correct, shouldn't you hyphenate the adjective form, that is, refer to things like "invoker's-rights subprograms"? I'm not even going to go there. Although I personally call the whole feature "invoker's rights" to agree with the PL/SQL manual, I'll try to make it through the rest of the post without using that phrase at all.

After all that, the syntax for the feature is AUTHID CURRENT_USER. Although there is an opposite AUTHID DEFINER clause, since that's the default, you would probably only ever use the CURRENT_USER form of the clause. It might get more love (and be easier to search for) if we called them "CURRENT_USER subprograms" or some such.

The mechanics of this feature are relatively easy to see. You can find the details in the PL/SQL manual, in a tutorial that points out some of the nuances, or in this Steven Feuerstein article with some best practices.

But still, how does that play out in the real world?

Well, you may have a PL/SQL application that goes through several versions, with each version in a different schema on the same database server -- MYAPPV1, MYAPPV2, MYAPPV3, etc. Or maybe there are slightly different incarnations of the app for different business groups. When you make a fix or improvement to one procedure or function, if that change is applicable for the older or alternate versions, you need to recompile the procedure or function in each schema. If program units that needed periodic upgrades were put into a central schema and declared with AUTHID CURRENT_USER, making the change in one place would propagate the improvements to all versions of the application. You could hardcode the central schema name in all calls to the CURRENT_USER subprograms, or create synonyms and pretend they're in the same schema as the rest of the code.
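A minimal sketch of what such a central schema might hold (the UTILS schema, table, and procedure names are all made up for illustration):

```sql
-- In a shared UTILS schema. Because of AUTHID CURRENT_USER, the
-- unqualified name "event_log" resolves at run time in the *caller's*
-- schema, so MYAPPV1 and MYAPPV2 each write to their own event_log table.
-- (UTILS still needs its own event_log so the procedure compiles.)
create or replace procedure log_event (msg varchar2)
  authid current_user
as
begin
  insert into event_log (logged_at, message) values (sysdate, msg);
end;
/

-- Each application schema can then hide the qualification:
-- create synonym log_event for utils.log_event;
```

Fix a bug in utils.log_event once, and every version of the application picks it up on its next call.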

The trick then would be to identify which procedures and functions are the best candidates for this treatment. Logically, they should be small simple subprograms that have relatively few dependencies, so they won't break if your application gains or loses tables, columns, or other subprograms as it evolves. They should also be subprograms that you could predict would be important to fix or upgrade in the future -- ones that could give a big speedup when you learn some tuning technique or use some feature in the latest database release; ones that implement security checks that you'll make more stringent as security practices evolve; ones that display common UI elements that you can make more usable and accessible over time.

Of course, this type of foresight is easier said than done. Sure, just take all your slowest, buggiest subprograms with the worst output, and separate them out. But you might be able to retrofit such changes at a reasonable point. I'd suggest evaluating whether you could make use of AUTHID CURRENT_USER around the time of the 3rd instance or version of the application on the same server.