TOBEY compiler technology a few years ago. (And maybe still, for all I know.)
var x = a.b.c.d.e;
x.something = 0;
x.something_else = 'foo';
Because x holds a reference rather than a copy, this code is actually modifying attributes of the original a.b.c.d.e object.
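The reference semantics are easy to verify with a plain object; the nested literal below is just a stand-in for a real DOM path:

```javascript
// Stand-in for a deeply nested structure like a DOM tree.
var a = { b: { c: { d: { e: { something: 1 } } } } };

var x = a.b.c.d.e;      // x points at the same object, not a copy
x.something = 0;
x.something_else = 'foo';

console.log(a.b.c.d.e.something);      // 0
console.log(a.b.c.d.e.something_else); // foo
```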
Intuitively, you can see that you're saving bytes in the program, which means less time to transmit the script file, less time to parse it, and less space consumed in the browser cache, which could avoid re-fetching some page, image, or other resource later. But there are performance implications at a deeper level too.
Array notation and object notation are effectively interchangeable: x.foo works the same as x['foo']. In turn, x[0], x[1], x[2], etc. are not necessarily contiguous in memory; they're just entries in a hash table whose keys happen to be integers. Looping through an array involves a hash table lookup each time, not just incrementing a pointer.
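A quick sketch of both points. (One hedge: modern engines optimize dense arrays into contiguous storage, but a sparse array still behaves like a hash table keyed by index.)

```javascript
// Dot and bracket notation reach the same property slot.
var x = { foo: 42 };
console.log(x.foo === x['foo']); // true

// "Array indices" are just keys: a sparse array holds only the
// entries actually assigned, not a contiguous block of a million slots.
var arr = [];
arr[0] = 'a';
arr[999999] = 'b';
console.log(arr.length);              // 1000000
console.log(Object.keys(arr).length); // 2
```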
var x;
function parent_scope() {
  var y;
  function current_scope() {
    var z;
    var a;
    a = 1;
    // a is found in current scope, requiring just one hash table lookup.
    x = 'hello';
    // x is not found in current scope;
    // x is not found in parent scope;
    // x _is_ found in grandparent scope.
    // Meaning 3 hash table lookups to figure out where x is.
  }
}
When all these slight variations in efficiency get put inside loops or frequently called functions, that's when the performance can start to drag. That's why you see people doing things like:
for (var i = 0, limit = some_object.length; i < limit; i++) ...
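Side by side, the two loop forms look like this (some_object here is just a stand-in for whatever collection you're iterating):

```javascript
var some_object = ['a', 'b', 'c', 'd']; // stand-in collection

// Uncached: .length is looked up on every pass through the loop.
var visited = 0;
for (var i = 0; i < some_object.length; i++) {
  visited++;
}

// Cached: one lookup for .length, then a plain local-variable compare.
var visited_cached = 0;
for (var j = 0, limit = some_object.length; j < limit; j++) {
  visited_cached++;
}

console.log(visited, visited_cached); // 4 4
```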
Getting back to our first example: when you store a pointer to some deeply nested object and then refer directly to that object multiple times:
var it = x.y.z.style;
it.marginTop = "0em";
it.marginBottom = "1em";
it.marginLeft = "1em";
it.marginRight = "2em";
you're saving all those lookups, both the scope-chain resolution and the traversal of object-member hash tables, on every reference.
At the DOM level, you've got a property element.childNodes and a method element.hasChildNodes(). Seems intuitive that checking for the existence of child nodes should be faster than returning the actual nodes, right?
Remember, childNodes is just giving you a pointer to a data structure that is being maintained continuously, not making a copy of anything, or assembling the structure on-demand. So the overhead in each case is constant. That data structure could be empty, which is why the has*() method could be useful. There's a distinction sometimes between "this object exists" and "this object contains something useful", so you wind up doing things like:
var els = document.getElementsByTagName('a');
if (els && els.length > 0) ...
// Must test that the assignment succeeded,
// _and_ that what came back wasn't an empty structure
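To make the "exists" versus "contains something useful" distinction concrete outside a browser, here's a minimal stand-in for an element; the assumption is only that real DOM elements expose the same childNodes / hasChildNodes() pair, which they do:

```javascript
// Minimal stand-in for a DOM element, for illustration only.
function makeElement(children) {
  return {
    childNodes: children,          // the live list; may be empty
    hasChildNodes: function () {   // answers the emptiness question directly
      return children.length > 0;
    }
  };
}

var empty = makeElement([]);
var full  = makeElement(['text node']);

// childNodes always exists; the useful test is whether it's empty.
console.log(empty.childNodes.length); // 0
console.log(empty.hasChildNodes());   // false
console.log(full.hasChildNodes());    // true
```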