Late for Earth Day



I’m sorry, Dave. I can’t do that.

So the good ship Industrial Civilization ploughs its course through the coal and oil of the ages, steaming toward the twin icebergs of climate change and peak oil with all engines ablaze. Aboard it we scream, “Stop! Turn! Do something!” but all for naught. Why?

Not for lack of awareness, that’s for sure. I still encounter people who don’t believe the problems are real. They exist. In droves. But generally, they’re people whose sense of proportionality differs because they’ve already focused on other problems. Ones no less real, and far more immediate. The very problems that are why the expansion of industrial civilization happened the way it did, and had the benefits it had. Poverty, disease, hunger. Fundamental lacks of the foundation levels of Maslow’s hierarchy.

We can’t leave the carbon in the ground. No-one can. The oil will burn. The climate will change. The Anthropocene catastrophe will not be averted.

Of course, industrialization didn’t solve these problems. But in short time spans and localized areas, it helped a lot. And it still does. That expenditure of fossil fuels is providing direct benefit. It has costs, yes — the externalities are awful, and grow worse with time. Everything has a price. But it’s worthwhile not to mistake what we have bought with that price. And what we would reap if we tried to pay it down early.


Society is like a gas, encapsulated in the elastic bubble of a nation. The more energetic the gas, the farther outward it can press its influence, and the more space (affluence) is available to its residents. Drain that energy away and it comes tumbling in. Do this too fast and you create internal turbulence. Analogously, diminish the economy too fast and you generate internal unrest.

Diminish the energy use rate too soon, and you strand resources. This is the prisoner’s dilemma perfected. Whoever did not cut back their power is in a position to overwhelm those who did, via either politics or force, and take their resources. If such a state does not already exist at the time, one will rapidly find itself created to take advantage of the opportunity. We can’t leave the carbon in the ground. No-one can. The risks to any nation of significance that tries are too great, and would only end in it losing and its policies being reversed anyway.

Insignificant nations can viably change directions to start building “post-carbon” economies. Some of them may be the powerhouses of the next round. But right now, they aren’t called insignificant for nothing. The oil will burn. The climate will change. The Anthropocene catastrophe will not be averted.


So. What now?

Quick & Dirty Custom Fonts



$ mkdir ~/.fonts

$ cat > ~/.fonts/


fc-cache && mkfontscale && mkfontdir
xset fp rehash

$ cat >> ~/.xinitrc  #(actually, no, edit and insert before last line)

xset +fp ~/.fonts

$ chmod +x ~/.fonts/

$ xset +fp ~/.fonts

$ cp ~/Downloads/*.ttf ~/.fonts/

$ ~/.fonts/
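Pulled together, the sequence above amounts to a small setup script. A sketch, assuming the refresh script is named refresh-fonts (the actual filename was lost from the post):

```shell
#!/bin/sh
# Sketch of the steps above as one setup script. The helper name
# "refresh-fonts" is a placeholder; the original filename was lost.
mkdir -p ~/.fonts

# Write the refresh helper: rebuild the font caches, then tell the
# X server to rescan its font path.
cat > ~/.fonts/refresh-fonts <<'EOF'
#!/bin/sh
fc-cache && mkfontscale ~/.fonts && mkfontdir ~/.fonts
xset fp rehash
EOF
chmod +x ~/.fonts/refresh-fonts
```

After dropping .ttf files into ~/.fonts, run the helper; adding `xset +fp ~/.fonts` to ~/.xinitrc (before its last line) makes the path survive X restarts.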

Ghost of configurations past



Whatever happened to the eink screen on my netbook? I hadn’t used it in months… Not just because I got busy and the weather got cloudy, but because it lacked polish. It worked, but it didn’t Just Work. There were commands to remember, and options to configure. It involved telnet. It just wasn’t production-level shiny. But now my headaches are back, which got me back to thinking and tinkering.

The setup as it stands

Execute on Kindle DX directly: (Only if not already set)

Plug in Kindle DX.

Execute on netbook:
sudo net/

  INTERFACE=`ip link|grep enp0s29f7|awk -F: '{print $2}'`
  ip link set $INTERFACE up
  ip addr add dev $INTERFACE

This is such a mess because, despite the naming scheme’s ‘static’ interface names, the name itself is NOT static; only its prefix is.
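One way to make this less brittle is to match on the stable prefix and capture whatever full name the kernel assigned, rather than hard-coding the changing suffix. A sketch, with sample `ip -o link` output inlined so the extraction logic can be seen working (the sample interface name is made up):

```shell
# Match the stable prefix and take whatever full name appears after it.
# Sample `ip -o link` output is inlined here for demonstration; in real
# use, pipe the actual command output through the same awk filter.
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
3: enp0s29f7u1u2: <BROADCAST,MULTICAST> mtu 1500'
INTERFACE=$(printf '%s\n' "$sample" | awk -F': ' '/enp0s29f7/ {print $2; exit}')
echo "$INTERFACE"   # prints enp0s29f7u1u2
```

The `exit` keeps only the first match, so extra matching interfaces won’t mangle the variable.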

Execute on netbook:
x11vnc -scale 1200x824 -clientdpms

Make sure the Kindle DX is sideways and not on screensaver.

Use netbook to execute on Kindle DX:
/etc/init.d/netwatchd stop

Quickly hide both vnc related windows because they’re scrolling junk. And that’s it; you’re set.

Kill x11vnc to get back out of it. If you pull the cable, you can use ctrl-] to regain control of your telnet session. If you killed the client instead of the server, log back into the kindle and restart powerd to get sleep switch functionality back.

How well it works

Well, it *does* work… But I’m not going for dancing bear prizes here.

It’s hard to read. In lighting where an ebook on the Kindle would be fine, this isn’t just not fine, it’s more eye strain than using the lcd screen. At least, when I don’t have a headache. It’s still more usable than the lcd with a headache. On close inspection, the readability issue seems to be one of font clarity. White areas are white and black areas are black, but most fonts appear to be mostly neither. Still, passable in bright enough light, though bold lines would help.

Block updates help usability in most regions, though they make scrolling text painful: scrolling a text block repaints the entire thing. Pagination is definitely the order of the day for any system designed to use eink. On the other hand, small changes to text in an otherwise unobserved corner of my screen are actually noticed this way, because they briefly flash black. This means I spot chat alerts, draft saves, etc. much more readily.

Screen delays are neither good nor bad exactly. They lend a familiar but very different dynamic to the interaction. If you’ve ever telneted into a unix server from a green phosphor terminal over a slow modem, you’ll know exactly the dynamic I’m talking about. Or, even more exactly, one of the early monochrome Sun monitors. Great for contemplating single screens of data, annoying if you need to switch back and forth between stuff.

The mouse is barely usable, so keyboard shortcuts are a must. Typos are likewise hard to catch quickly, so accurate typing matters far more than usual. This makes it critical to have a reliable keyboard, and means that only very reliable applications are usable quickly. Far too many modern apps drop keystrokes at speed, failures that are ordinarily both easily mistaken for human error and easily corrected on the fly.

Default console colors translate into greyscale poorly and inconsistently. This is a source of a lot of the weak grey tones, though scaling also seems to contribute to those. It makes vim syntax highlighting counter-productive. Fortunately, code is also more readable without highlighting than usual. The most natural color scheme is primarily white, which is a reversal of my normal tendencies on an lcd screen, so I had to redo themes and wallpapers. However, this makes bright websites non-annoying without resorting to scripts. In particular, some sites using the modern large fonts on white style become nearly as comfortable as ebooks.

Physically, the setup is bulky and top-heavy. The kindle is best stored in its case so the screen doesn’t get damaged, which adds even more bulk. It’s a lot harder to balance on my lap than the netbook alone is.

The screen is mostly off behind the kindle, but not entirely. DPMS isn’t ignoring keystrokes, so it flashes white on occasion. This would be really annoying if it weren’t covered over. There is also no screen blanking of the kindle. Sure, this doesn’t draw power or damage anything, but I still consider it a misfeature. Ghosting is also present on the eink screen, but I haven’t found it anywhere near as troublesome as expected.

Ideas by the bucketload

Since a lot of the frustration is with form factor, top-heaviness etc. I am pondering whether replacing the shells of both devices with a custom case would be feasible. This would be a grand opportunity to steampunkify the entire thing, of course, which is far too tempting, if entirely outside the realm of my hardware modification experience. It’d be a serious case mod, requiring custom wiring and a lot of fiddly bits. But it gives me something pleasant to ponder while chasing the software around in circles.

On the software side, you may have noticed that I switched back to x11vnc from Xvnc upon discovering that there was an option that would let me stretch the screen to the kindle’s resolution. This allows switching screens without having to restart all of my apps or move them to different workspaces, means I don’t have issues with the pointer sliding off the screen, and means I don’t have to go track down ways to increase my font size. It does, however, also mean that I’m not using the full resolution of the kindle, my fonts don’t pixel align with the kindle pixels, and images are stretched more in one direction than in the other.

The lack of pixel alignment is what causes the ambiguous grey fuzziness. It’s an unacceptable downside in the long term. But not having to bring up a whole new workspace was a *huge* advantage… Bugger. Insoluble, for now. Maybe if I could sufficiently script the whole obnoxious… Ok, next problem.

Startup is way WAY too manual. I’d like to just plug the kindle in, hit a key combo on it, and have the whole thing swap over. Can I do this? First, the network would have to enable itself automatically. netctl should allow me to do this. Except, its profiles rely on the static interface names actually being static. Which in this case they aren’t. Bugger. I’d have to either figure out how to make the interface names static, or find another way of hooking a network script to the plug-in action.

Once the network is up properly, it would be nice to use a key combo on the kindle rather than having to telnet in. As my original post referred to, there are directions for setting this up. They boil down to ‘go here, get launchpad, install it’. The vnc viewer already comes with a configuration script for it. However, it also comes with a script for running all the relevant commands automatically from the computer. Which might be even handier, seeing as how that’s the keyboard I expect to already have my hands on and be running any automation from. It’s based on Xvnc, though it certainly doesn’t need to be, and will apparently require me to figure out how to use multiple ssh keys since I’m certainly not going to render my primary key passwordless.

In the process of messing with this, I naturally discovered that the Xvnc and x2vnc setup I’d previously used was no longer functional. Aaaargh. But I will not bang my head on the wall of software all of the time. Hardware daydreams are at least as productive. I’d love to see this thing in a wooden case, with metal fiddly bits and a nice light that could be extended from the top for when I didn’t want to turn the room lights on, the screen set to flip around between the lcd and eink, a sturdier keyboard with comfy custom keys… I can at least daydream.

My code farts rainbows



Reading Clojure programs quickly drove me up the wall. It’s just as Lots of Irritating and Sadistic Parentheses as its syntactic predecessor. This needed something… something like rainbow colors!

Turns out, vim already has a rainbow mode for Lisp. Try :let g:lisp_rainbow = 1 in a file that’s using Lisp highlighting. Shiny, if a bit inelegant. The syntax highlighter for Clojure, however, uses a far less modular approach than that for Lisp, so it took a while to come up with a working variant. In the end, the first draft I produced was this:

syntax region clojureParen0 matchgroup=hlLevel0 start="(\|\[\|{" end=")\|\]\|}" contains=clojureParen1,clojureString,clojureComment,clojureError,clojureConstant,clojureBoolean,clojureSpecial,clojureException,clojureCond,clojureRepeat,clojureDefine,clojureMacro,clojureFunc,clojureVariable,clojureKeyword,clojureCharacter,clojureNumber,clojureVarArg,clojureQuote,clojureUnquote,clojureMeta,clojureDeref,clojureAnonArg,clojureDispatch,clojureRegexp
syntax region clojureParen1 contained matchgroup=hlLevel1 start="(\|\[\|{" end=")\|\]\|}" contains=clojureParen2,clojureString,clojureComment,clojureError,clojureConstant,clojureBoolean,clojureSpecial,clojureException,clojureCond,clojureRepeat,clojureDefine,clojureMacro,clojureFunc,clojureVariable,clojureKeyword,clojureCharacter,clojureNumber,clojureVarArg,clojureQuote,clojureUnquote,clojureMeta,clojureDeref,clojureAnonArg,clojureDispatch,clojureRegexp
syntax region clojureParen2 contained matchgroup=hlLevel2 start="(\|\[\|{" end=")\|\]\|}" contains=clojureParen0,clojureString,clojureComment,clojureError,clojureConstant,clojureBoolean,clojureSpecial,clojureException,clojureCond,clojureRepeat,clojureDefine,clojureMacro,clojureFunc,clojureVariable,clojureKeyword,clojureCharacter,clojureNumber,clojureVarArg,clojureQuote,clojureUnquote,clojureMeta,clojureDeref,clojureAnonArg,clojureDispatch,clojureRegexp
hi def hlLevel0 ctermfg=cyan cterm=bold
hi def hlLevel1 ctermfg=yellow cterm=bold
hi def hlLevel2 ctermfg=magenta cterm=bold

(Save as ~/.vim/after/syntax/clojure.vim to make it work.)

It’s dependent on the implementation details of the default Clojure syntax highlighter, which sets off every maintainability alarm in my head simultaneously. Otoh, it works, which is great for a starting point.

Here’s a snippet of code (off of Wikipedia) in the default highlighting:

(defn run [nvecs nitems nthreads niters]
  (let [vec-refs (vec (map (comp ref vec)
                           (partition nitems (range (* nvecs nitems)))))
        swap #(let [v1 (rand-int nvecs)
                    v2 (rand-int nvecs)
                    i1 (rand-int nitems)
                    i2 (rand-int nitems)]
                (let [temp (nth @(vec-refs v1) i1)]
                  (alter (vec-refs v1) assoc i1 (nth @(vec-refs v2) i2))
                  (alter (vec-refs v2) assoc i2 temp))))
        report #(do
                  (prn (map deref vec-refs))
                  (println "Distinct:"
                           (count (distinct (apply concat (map deref vec-refs))))))]
    (dorun (apply pcalls (repeat nthreads #(dotimes [_ niters] (swap)))))

And again with rainbow parentheses:

(defn run [nvecs nitems nthreads niters]
  (let [vec-refs (vec (map (comp ref vec)
                           (partition nitems (range (* nvecs nitems)))))
        swap #(let [v1 (rand-int nvecs)
                    v2 (rand-int nvecs)
                    i1 (rand-int nitems)
                    i2 (rand-int nitems)]
                (let [temp (nth @(vec-refs v1) i1)]
                  (alter (vec-refs v1) assoc i1 (nth @(vec-refs v2) i2))
                  (alter (vec-refs v2) assoc i2 temp))))
        report #(do
                  (prn (map deref vec-refs))
                  (println "Distinct:"
                           (count (distinct (apply concat (map deref vec-refs))))))]
    (dorun (apply pcalls (repeat nthreads #(dotimes [_ niters] (swap)))))

I, at least, get lost a lot less this way.

Of bile, villi, redbull and chicken skins



It’s been anecdotal knowledge among myself and a few wheat-intolerant friends for a while now that we needed a couple of extra nutrients that aren’t normally considered essential: Taurine, Glycine (go go gadget chicken skins), and something else we couldn’t put a finger on that Redbull seemed pretty good at providing along with the Taurine. Also, a bit higher fat intake. Sometimes a lot higher; though digesting it can be a bugger all its own. But why? Jokes about being part cat (they’re Taurine-dependent obligate carnivores) aside, it seemed rather odd.

Trying to pin down something tangentially related, Chandra and I dug up an incredibly relevant topic: Enterohepatic circulation. This is a circulation wherein material is excreted through the bile ducts into the intestines, then reabsorbed later in the digestive process by the intestinal villi. The same intestinal villi that are mangled by the immune system in people with Celiac type gluten sensitivity. It seems reasonable that, in someone with Celiac, decreased reabsorption would lead directly to increased dietary needs. But of what, exactly?

Bile acids, being the material excreted by the bile ducts, are in large part formed thusly: Cholesterol is oxidized, then bound to a number of different substances. These primarily include Glycine and Taurine, as well as Glucuronic Acid and Sulfate. Glucuronolactone, a derivative of Glucuronic Acid, may well be the Redbull mystery ingredient; though, oddly, I no longer see it on the label. A possible avenue for future supplementation experiments.

Wikipedia gives some rough numbers, none of which match each other coherently, which could be used to estimate the upper bounds on supplementation. According to the bile acid page (which looks a bit better curated than the cholesterol page), 20-30 grams of bile acids are generally produced per day, with an ordinary reabsorption rate of around 90%. That would indicate around 3 grams normally lost to digestive processes per day. The cholesterol page claims a 1 gram loss and 95% reabsorption, while putting cholesterol production for bile use at around 400mg per day. At any rate, if reabsorption failed entirely, this looks like an extra loss of up to ~27 grams beyond the normal 3. Call it 30 for fewer math headaches. (Probably less, but again, I’m trying to establish a loose upper bound here.)

Most of the bile acids contain about 4-5 rings worth of material, Glycine and Taurine have about half a ring or so, so loosely gauging molecular weight, I’d guess the bound amino acids contribute about a tenth. That’d give an upper bound of around 27 grams per day of fat* and 3 grams per day of other material spread across Glycine, Taurine and Glucuronic Acid; assuming perfect absorption. Perfect absorption is, of course, a lie. Especially with Celiac. That’s kinda the point in the first place.
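Spelled out, the bound arithmetic above looks like this (every input is the post’s own ballpark Wikipedia-derived figure, not verified data):

```javascript
// Rough upper-bound arithmetic from the text. All inputs are ballpark
// figures from the post, not measured data.
var produced   = 30;                    // g/day bile acids, high estimate
var normalLoss = produced * (1 - 0.90); // ~3 g/day lost at 90% reabsorption
var worstExtra = produced - normalLoss; // ~27 g/day extra if nothing reabsorbed

// Round the at-risk pool back up to 30 g and split it roughly 9:1 by
// molecular weight between the cholesterol-derived and amino portions:
var fatBound   = 30 * 0.9;              // ~27 g/day of fat
var aminoBound = 30 * 0.1;              // ~3 g/day Glycine/Taurine/Glucuronic Acid

console.log(normalLoss, worstExtra, fatBound, aminoBound);
```

These are loose upper bounds under the assumption of total reabsorption failure; actual needs scale down with however much villi function remains.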

*Cholesterol intake bears basically no relation to cholesterol levels; fat intake from which to create cholesterol is the key to increasing potential levels under resource constraint. (Though not actual levels barring resource constraint; those are under the purview of various regulatory mechanisms.)

Fortunately, that was a loose upper bound. While diminished in capacity, the intestinal villi must still be doing their job to some degree or we’d all be in much worse shape. So the loose bound establishes not so much a guideline as a guidescale. Worse intestinal damage? More additional nutrients needed. In roughly those proportions, barring differences in absorption capability; I’m not sure of the absorption pathways on unbound Glycine, Taurine, etc. Guesswork works really well though. I’ve personally pushed Glycine as high as 1 gram per day to noticeable good effect when particularly ill. Taurine losses anecdotally seem a bit lower.

While decreasing damage to the intestinal villi and allowing them to heal as much as they may is, of course, always the goal, realistically they aren’t going to be in tip-top shape basically ever. What consequences would lack of the supplementation outlined here be likely to have when it was actually needed?

Direct symptoms of not reabsorbing bile:

Cholesterol:
  • No direct information; however, cholesterol is used for: cell membranes (fluidity/permeability as well as general structure), Vitamin D production, and a wide variety of hormones including adrenal and sexual. Given this broad reliance, it would be very strange if these functions didn’t suffer under conditions of excessive loss.
  • Anecdotally, this correlates to an extreme lack of energy (both mental and physical), with muscle tics and other minor nerve discombobulations.


Taurine:
  • Taurine shows up in osmoregulation, calcium signalling, and as a general inhibitory neurotransmitter. Because it’s considered non-essential, there is very little direct information about lack of it in humans. In cats, deficiency is associated with eyesight degradation and a condition called ‘yellow fat’ in which fatty deposits are painful to the touch.
  • Anecdotally, a ‘buzzy’ feeling tied to poor calcium signalling and fat being painful to the touch both can occur in humans.

Glycine:
  • Since Glycine is considered non-essential or contingently essential, there is little direct information on deficiency. As a major component of collagen, any limitations of this amino acid are likely to show up in connective tissues. This includes ligaments and skin. I would personally expect formation of stretch marks (a form of scarring), poor wound healing, and loose joints or skin similar to mild Ehlers-Danlos syndrome. Glycine is also an inhibitory neurotransmitter, so some effects might show up involving that.
  • Anecdotally, insufficient glycine correlates to easy (even spontaneous) bruising, slow wound healing, extremely hypermobile joints, poor scar tissue formation, depression, and bleeding gums.

Glucuronic Acid:

  • The information for this is split crudely between its own article and that for Glucuronolactone. Wikipedia definitely shows its weaknesses here; I ought to dig up some real sources rather than comically cursory summaries that may be entirely inaccurate. However, between these articles, Glucuronic Acid is implicated in connective tissue stability and various forms of toxin elimination.
  • Having not tried it separately, I have no anecdotal points for this chemical.

In all, it’s rather pleasing to have a prospective explanation for such a peculiar constellation of dietary needs. Digging up proper sources to back most of this information (rather than the dreaded Wikipedia, shunned by all good intellectuals) would probably be worth the effort, as would finding better supplementary sources than Redbull, chicken skins, and an occasional fondness for pork rinds. Especially if Redbull keeps messing with their formula.

The ice cream is a lie


Properly, it’s iced cream. Or it would be, if it were proper.

Grams of fat per 4oz of cream: 22

Grams of fat per 4oz of ice cream: 3-7 depending on the brand

Grams of fat per 3oz ice cream bar: 22

One of these things is iced cream (plus egg and other goodies), the other is creamy ice. None of the brands of creamy ice, mind, are labelled low fat. Oh sure, they’re tasty! But if I were trying for low fat, that’s what I would have bought.

console.log() is not the time machine I was looking for



Common wisdom and StackOverflow would have it that the right, simple way to inspect an object when debugging JavaScript is to use console.log() to directly output it to the debugging console, where it can be expanded and inspected at leisure. The natural assumption of a person accustomed to logging facilities is that what is inspected in the log will be what the logged object was at the time when it was logged. Alas, for console.log(), this is not always so. I submit this simple example for your befuddlement:

var a = new Array(3);
a[0] = new Array(3);
console.log(a, a[0] ? a[0] : null, (a[0] && a[0][0]) ? a[0][0] : null);
a[0][0] = 1;
console.log(a, a[0] ? a[0] : null, (a[0] && a[0][0]) ? a[0][0] : null);

Pop that in a file suitable for accessing it in either Chrome or Firefox, then bring up the debugging console. Expand the first entry on the first line. Data not entered until after that line is visible. As reported by Chrome:

[Screenshot: Chrome’s console, in which the expanded first array already shows the value assigned after the log call.]
For being a stateful deep object inspector, this is completely useless.

What’s going on here? My best guess is as follows: objects are generally passed around by reference. A reference to the object is taken by console.log and passed on to the debugging console, which holds the object open from a memory-management standpoint. When the object is expanded in the debugging console, the inspector reads whatever is currently in, i.e. was last assigned to, that object. Chrome is slightly clever and recognizes simple arrays, so the behavior only reveals itself when they are nested. Firefox takes the more consistent, if even less usable, approach of applying this behavior to all objects that would ordinarily be passed by reference.

Some of the alternate solutions presented in the above linked StackOverflow post may be useful for getting around this quirk. The most important part, however, is recognizing that it exists. I went around in the most convoluted circles trying to figure out why my arrays didn’t appear to be initializing before isolating it to unintuitive console.log() behavior.

Addendum: JSON.stringify(a) is serviceable for combination with console.log(). Likewise JSON.parse(JSON.stringify(a)) where a deep copy is needed. It sounds awful, but it works and is purportedly efficient enough for most uses.
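A minimal sketch of that snapshot approach: logging a stringified copy freezes the state at log time, while the raw reference does not.

```javascript
// Snapshot an object at log time so later mutations don't change what
// the console shows. The stringify round-trip is a quick deep copy
// (it drops functions, undefined values, and cyclic references).
function snapshot(obj) {
  return JSON.parse(JSON.stringify(obj));
}

var a = new Array(3);
a[0] = new Array(3);
var before = snapshot(a);  // frozen: a[0][0] not yet assigned
a[0][0] = 1;
var after = snapshot(a);   // frozen: a[0][0] === 1

console.log(JSON.stringify(before)); // [[null,null,null],null,null]
console.log(JSON.stringify(after));  // [[1,null,null],null,null]
```

Unlike console.log(a), the `before` copy still shows the pre-assignment state no matter when it is expanded.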

A quick guide to vastly lower cellphone bills



Presuming that you already have an android device… (And if not, they’re relatively inexpensive to get used; this doesn’t require a particularly advanced one. Anything back to a Droid 1 will work if switched over to CyanogenMod to reduce clutter.) Also presuming you have wifi access at home and at most locations where you wish to receive calls.

  1. If you’re in a contract, calculate whether the early exit fee is less than the amount you’d pay over the remainder of the contract. It probably is, in which case bite the bullet and do it.
  2. Switch over to a pay-as-you-go plan with no contract. Just about every major provider now offers these. The best ones are structured so that you pay only on days you use, have infinite usage on those days for a set rate of a couple dollars total, and have no ongoing fees. I found T-Mobile’s to be the best deal when I did this.
    • Unlimited usage on a given day is important, because it sets a fixed limit on how fast the pre-paid account can drain. If you’ve got at least $14 in there and $2/day max, you know it can be used every day for a week as much as needed before the account is drained. When things come up and a phone is suddenly essential, it’s generally needed a LOT for that short timespan.
  3. Get a Google Voice account/number. It’s a good idea to make this a separate number even if your provider gives you the option of combining them, that way if you ever have to switch providers there is no hassle. Yes, number porting works, but if you number port from a provider which combines directly with Google Voice to one that doesn’t, the number can no longer be your Google Voice number.
    • From here on out, the Google Voice number becomes your main number. Do fix the ugly voicemail greeting. Everything will go through this number – calls and texts both.
  4. Install GrooveIP Lite on the android phone (it’s free). This allows you to dial out over wifi. With a little configuration, it will also allow incoming calls over wifi.
  5. Install also the Google Voice app on the phone. This will do outbound text over wifi, as well as providing a good interface to your general inbox showing texts, voicemails, etc.
  6. Configure Google Voice.
    • Set default phones on the general settings page to only ring ‘google chat’. This actually means to ring all VoIP clients. GrooveIP is a VoIP client and will be duly rung by this setting, so this effectively enables incoming phonecalls over wifi from any origin. This will keep wrong numbers and other confusions from coming in over the cell network. Alternatively, if none of the boxes are checked, unknown callers will go straight to voicemail.
    • Under ‘Groups & Circles’, set All Contacts to ring through to both ‘google chat’ and your cellphone. This way the choice of where known callers go depends on which network you’re active on. If you’re in airplane mode but on wifi, wifi gets the call; if you’re off wifi but on the cell network, the call comes through there. This is of course but one possible setup, any arrangement of groups and defined ringthrough characteristics is possible.
    • Consider carefully whether to turn on text messages to the cellphone. If this is left off, they land in the Google Voice inbox, to be picked up by the app next time you’re on wifi. (Or can be set to be delivered to email.) If set to go to the cellphone, they will behave more or less as SMS normally does – including arriving later if you were off the cell network, thereby incurring fees anyway.

There you have it. Running the system is just a matter of checking Google Voice notifications (the app helps a lot with this), using the right apps when dialing/texting out over wifi, and remembering which network to be on when. An app such as Tasker can do a lot to help with making sure you’re on the networks you mean to be on if you’ve got a regular schedule. Well managed, this system should only cost a couple of dollars for each day it’s used outside of wifi networks, while consistently providing the full capabilities of a modern smartphone any time they’re needed as well as the added benefits of Google Voice in terms of call blocking, voicemail transcripts, etc.

Addendum: Most of this is entirely feasible without an android device. The result is a little less shiny, but with a microphone on a computer a VoIP client there can take calls similarly. I have heard that Google Hangouts works for this, but not yet tested it.

The kind of optimization not to be delayed



Mind full of cries to eschew premature optimization, with little objective beyond getting my feet wet by finishing something already, I dove into a relatively straightforward project. Granted, the naive algorithms looked nasty, being a double layer of recursions — one to cascade down the DOM tree, one to comprehend each layer of it along the way — but the echoes cried in my head.

Premature optimization!

The perfect is the enemy of the good!

Naive implementations maximize readability.

You’re using unit testing, you can always refactor.

So naively I ploughed ahead, and with success. The code was writ, the unit tests passed, the relatively small cases handled with sufficient results. Then I threw a real website at it, and watched Chrome bog down with the CPU at 97% and no end in sight. I say no end in sight because, being a delightfully responsive operating system, my Linux box didn’t even blink; I only noticed a couple hours later because it was getting hot. (Next project: Fix the alert levels on the load monitor.)

After some frustration with trying to squeeze JavaScript into a reasonable debugging infrastructure so I could juice it for some profiling data, I realized there was a far simpler approach than whining about the computer not doing the work for me. I stopped and calculated the time complexity.

Now, worst case for the flat part was two recursive calls at each step, giving 2^n. Each layer can at most cascade down each of its elements, giving n^m where n is the number of elements in a layer and m is the number of layers. Granted, these are both worst cases, but that doesn’t change the overall order; just the irrelevant constants. Likewise, they can’t both be worst case at once, being as they apply to opposing subsets of elements, but again the result of this is an irrelevant constant term. So, the overall time complexity is:

O(2^n · n^m)

This is, quite frankly, a disaster. At a rough order of magnitude, if n,m≈10 then this is around 10^13 operations; with n,m≈100 it’s more like 10^230. For a sense of scale, at one operation per nanosecond, the latter would take vastly longer than the age of the universe.
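As a sanity check on those magnitudes, here is the back-of-envelope estimate as code, assuming the worst-case reading of roughly 2^n recursive splits times n^m cascade work (my reconstruction of the post’s figures; treat it as illustrative only):

```javascript
// Back-of-envelope operation count for the naive double recursion,
// assuming ~2^n for the flat part times ~n^m for the layer cascade.
// The exponents are my reading of the post, not a measured profile.
function estimateOps(n, m) {
  return Math.pow(2, n) * Math.pow(n, m);
}

console.log(estimateOps(10, 10)); // -> 10240000000000 (~1e13): hours at 1 op/ns
```

estimateOps(100, 100) comes out around 1.3e230, which at one operation per nanosecond dwarfs the age of the universe; no amount of constant-factor tuning rescues that.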

This is not a case that can be fixed with tweaking details. This is not a case for profiling segments here or there. It’s simply one where all effort expended in implementing that particular algorithm was wasted, because it was never going to get off the ground. This is the sort of analysis that, driven by mathematics rather than guesswork or dogma, should be done and done early; before time and energy are wasted.

Granted, in this case I learned a lot along the way, and it is a learning project, so it’s not entirely wasted. Little ever is to the industrious mind. But that does not make it an optimal use of time. Still, an expensive mistake is a memorable one. You can bet I’ll be checking the complexity of algorithms before I invest hours in implementing and testing them in the future.

Anonymized AJAX loads



Suppose you wanted to load a URL through AJAX without sending a slew of cookies with the request. A little like the anonymized modes in web browsers… Some digging (Thanks, Semi!) turned up the Cookie Monster solution, which looks tolerable if complicated. It’s not even as Firefox-specific as it initially seems. However, upon actually looking at the W3 spec for XMLHttpRequest, a far easier method presents itself:


Preliminary testing, at least in Chrome, shows this to be supported in the wild.

There does, however, seem to be one caveat: if the page loaded this way does a cascade of further page loads, those will be sent cookies like normal, i.e. the anonymization does not propagate.