Another way of thinking about global variables

(This is a computer programming topic.)

The problem

So “global variables” are bad, right?

Well, yes. They suffer from initialisation-order problems: if x is a global and f is a function which uses it, then something is going to go wrong if f gets called before x is given an initial value. More generally, if one function uses a value while another changes it, something may go wrong. This can be a problem even without multiple threads.
One “alternative” is to use “instance” variables which either store a value or are empty. A special “access” function (often called instance in Java) can return the stored variable or stop the program with a suitable message when the variable has not yet been initialised. But this adds run-time overhead and doesn’t solve the more general problem of simultaneous access.

A solution which solves many of these problems is to put your variables in a caller function (or an object held by a caller), then pass a reference to the variable into each function using it. This solves the “initialisation” problem, since one cannot take a reference to a variable which has not yet been created — though it still allows errors if the variable isn’t initialised as it is created. But it doesn’t solve the simultaneous access problem.

Add in lifetime analysis, as the Rust compiler does, and the simultaneous access problem is solved too.

But is this a panacea? No. In smaller code bases this is fine, but in large projects there can be many variables which need to get passed about, sometimes through several functions just to reach the place they are needed. This can result in function signatures of unwieldy length just to pass all the needed references in, and it increases code churn: any time a variable needs to be added which is used by f, which is called by g, which is called by (and so on) some caller F, then in addition to f the signatures of g and the rest also need to be updated. And not just the signatures, the calls too.

Further, this promotes a hierarchy of data ownership. In some cases this may be perfectly appropriate, e.g. (to use a common computer science example) a car has four wheels which each have one tire, so it may be perfectly appropriate to have some structure like this (to use Rust syntax):

struct Wheel {
    traction: f32,
    tread_depth: f32,
}

struct SteeringWheel; // details omitted for the example

struct Car {
    wheels: [Wheel; 4],
    steering: SteeringWheel,
}

let my_car = Car::new(); // assuming a suitable constructor

Now, say you put six cars in a race. Before the race starts, five commentators each give their opinion on each car. The logical way to store those comments, according to the data hierarchy, is to add a list of comments within each Car struct, or maybe a list of car comments within each commentator’s memory. But do you really want lists embedded in lists? A better option might be to have a matrix of comments (a table, with one axis correlated with cars and the other with commentators).

Adding a table like this, however, has its own problems. It goes against the grain of object-oriented design and the data hierarchy. Why is this bad? Well, for one thing, there are more variables to pass around. If a function wants to list each car alongside its comments, it needs to be passed not just the cars but the comments too. Etc. It can also make it hard to reason about how variables are used (e.g. where they are modified).

Okay, enough cheesy examples. Some real ones. I have had to choose between the dangers of global variables and the evil complexity of deep hierarchies many times while working on the C++ simulator OpenMalaria. One example is interventions affecting mosquitoes (which, for those who know nothing about malaria, are the vectors which transmit the disease). The simulator has two transmission models: a simplistic “non-vector” model and the more detailed “vector” model, which simulates populations of several species of mosquito. An intervention affecting mosquitoes, for example larviciding (treating pools of stagnant water where mosquitoes breed in order to kill off their larvae), has a parameter describing its effectiveness against each type of mosquito as well as some parameter describing what portion of breeding sites are currently affected. The former depends on the mosquito species, so hierarchically belongs within the “species” object within the “vector” object. The “portion affected” parameter hierarchically does not belong in the “species” object, but if it is not put there it must be passed into the function which calculates the emergence of new mosquitoes from breeding sites. If it is put there, the function deploying larvicide must have access to each “species” object to update it, which in an object-oriented program means that the “species” type should have a function to deploy the intervention, and the “vector” type should have a function to call that. One intervention on its own is not a problem, but there are several interventions, some with effects within multiple modules of the code, and many other things going on at the same time. This leads to bloated interfaces (e.g. the interface (abstract class) for the transmission model), many parameters needing to be passed into functions, and data associated with one thing (in this case, the larviciding intervention) being split between several different modules.

A solution

Global variables would make all this a lot easier; however, they have their own problems, as highlighted above.

If the problem is not the global variables themselves but the way in which they are accessed, though, why can’t we just control the access?

Let global variables be created normally, but alongside each create a token (an access key). Let any function using that variable require the associated token (in read-only or read-write form); this could be written explicitly in the function signature, but for the most part the compiler could work it out automatically. Let any function calling a function which requires a particular token require that token itself, and within the function guard usage of the token as Rust currently guards lifetimes: if a token is in use by one function, it is locked and cannot be used by another function (nor can the variable be accessed). These token requirements should bubble up all the way to the program’s main function.

Additionally, any function assigning to a variable before doing anything else with the variable would be marked as initialising that token. It would be required that the main function initialise every token which is used.

Similarly, any function destroying a global variable without re-assigning it would be marked as a de-initialiser. It should be required that main is also a de-initialiser for every token used, and that a de-initialised token cannot be used without being initialised again. To allow re-assignment of globals it may be required that new-assignment vs re-assignment be explicitly differentiated, or this could be automatically checked at run-time.
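
To make the idea a little more concrete, here is a rough sketch of how something token-like can be approximated in today’s Rust: a zero-sized token type whose possession stands for “the global has been initialised”, with read access requiring a shared borrow of the token and write access an exclusive borrow. This is only an illustration, not an existing feature or library; the names (CoverageToken, LARVICIDE_COVERAGE and the functions) are invented for the example, and the unsafe blocks are exactly the part the proposed compiler support would take care of.

// A zero-sized token: possessing one means the global has been initialised.
struct CoverageToken(());

static mut LARVICIDE_COVERAGE: f64 = 0.0;

// The "initialiser": the only place a token is created.
fn init_coverage(initial: f64) -> CoverageToken {
    unsafe { LARVICIDE_COVERAGE = initial };
    CoverageToken(())
}

// Read access requires a shared borrow of the token.
fn read_coverage(_token: &CoverageToken) -> f64 {
    unsafe { LARVICIDE_COVERAGE }
}

// Write access requires an exclusive borrow of the token.
fn set_coverage(_token: &mut CoverageToken, value: f64) {
    unsafe { LARVICIDE_COVERAGE = value };
}

fn main() {
    let mut token = init_coverage(0.0); // main initialises the token
    set_coverage(&mut token, 0.25);
    println!("coverage = {}", read_coverage(&token));
}

Because the borrow checker already stops the token from being shared and mutated at the same time, simultaneous use through the token is ruled out within one thread; what the proposal adds is having the compiler check the accesses themselves, so the unsafe blocks would not be needed.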

It is worth noting that, for all the complexity of this proposal, it still only ensures correct initialisation, prevents certain dodgy practices (e.g. I am not quite sure how passing references to functions requiring a token would work), and prevents simultaneous access from multiple threads.

Additionally, it may be useful to allow “locking” a token with a key (e.g. a private member of a struct), via special function signatures which lock or unlock the token with the given key.

Conclusion

Initialisation of global variables can be enforced before usage, as can correct deinitialisation. Certain types of simultaneous access, including from multiple threads and where explicitly locked by one user, can be prevented. And this can be done without run-time overhead.

Function signatures may, in their full form, increase in complexity quite substantially; however, other than at API boundaries and maybe in a few special cases, this can all be done implicitly within the compiler.

Cell, RefCell

As I typed this, I was not really sure whether adding tokens to control access to global variables is worth the cost. An existing solution in Rust would be to use Cell and RefCell (see Manish Goregaokar’s blog post on this). This, combined with Option to control initialisation, would be a good alternative for mutable global state, adding only a little run-time cost.
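
For illustration, here is a minimal sketch of that pattern: an Option inside a RefCell, checked at run time, placed in thread-local storage since RefCell is not thread-safe. The names (CONFIG and the two helper functions) are made up for this example.

use std::cell::RefCell;

thread_local! {
    // None until initialised; borrows are checked at run time.
    static CONFIG: RefCell<Option<String>> = RefCell::new(None);
}

fn init_config(value: String) {
    CONFIG.with(|c| *c.borrow_mut() = Some(value));
}

fn read_config() -> String {
    CONFIG.with(|c| {
        c.borrow()
            .clone()
            .expect("CONFIG used before initialisation")
    })
}

fn main() {
    init_config("larviciding enabled".to_string());
    println!("{}", read_config());
}

Unlike the token idea, the initialisation check here happens at run time (the expect call), but it needs no language changes at all.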


Yoga 2 Pro + Heisenbug

Had my new laptop about two weeks, time to write up my thoughts. Lenovo’s new Yoga 2 Pro is a tablet wanting to be a laptop — or a laptop wanting to be a tablet — or something in between. After trying out a few modes I mostly use it as a laptop with a touchscreen or sometimes as a tablet for reading (or using mouse-only programs). The machine’s a bit oversized for an e-reader, but awesome for youtube.

Anyway, on to the interesting stuff. Does it run well under Linux? And is the result usable?

I can only think of three problems I had installing Linux. The first was getting past Windows. Since the device boots using UEFI, there’s no way of getting to the “BIOS” screen or booting another device without going through the OS, which means accepting the licence agreements, booting Windows, and clicking the shutdown button while holding shift (took me a while to figure that one out). The second was another stupid bit of engineering: the Fedora 20 “Heisenbug” installer (at least the KDE Beta one) doesn’t include gparted (or KDE’s equivalent partition manager). It does have some tool to “reclaim space” from other partitions in the installer, but it’s not clear what exactly it will do. Luckily GParted’s live disk is really easy to use. The third was making the backlight and wifi work. You can find hints elsewhere; it’s basically a case of using the `acpi_backlight=vendor` kernel option and blacklisting the “ideapad_laptop” module.

So, after that, what works? On the hardware side, almost everything. There’s no physical radio kill switch and the function key (“F7”) doesn’t do anything, but you can use rfkill. The accelerometer doesn’t work for rotating the screen, but you can still do it manually (use this script). Battery life is okay at 9-10W draw on light usage (that’s about 5 hours) and 15-25W with higher CPU usage; hopefully it will be improved by the 3.13 kernel. Most annoying is that around 2 out of 3 times when closing the lid the machine tries to go into standby mode but immediately wakes up again (fortunately the worst case is having to open and close the lid a few times until it succeeds).

What about the 3200×1800 pixel screen? This about sums it up: there are a few glitches, but finally I can have a decent amount of easily readable content on a laptop! Seriously, if your work involves quite a lot of text and working on a small-enough-to-be-mobile screen, there’s no turning back. Probably the slightly less dense “retina” screens on the MacBook Pros are much the same in terms of usability, but the difference compared to a low-resolution screen (by which I mean anything up to and including 1080p) is night-and-day. The screen fits two windows side-by-side each over 100 characters of text wide — and text that’s easy to read at a comfortable distance of 60 cm or so (that makes it smaller than font normally is on a ~100 DPI screen, but since it’s far smoother it’s probably actually easier to read), with all the usual borders and controls around the edges (webpages still frequently need more than half the screen, but most other applications don’t).

So does this work? On Linux? The problem is that unscaled applications use fonts that are painfully small to read and icons that are a job to click on. But yes, it does, at least using the KDE desktop, and accepting some glitches. First thing you need to do: increase the desktop font size (you can do this in KDE’s System Settings, under Application Appearance → Fonts → Force fonts DPI — I use 180, although the screen’s true DPI is 275). Next, increase icon sizes massively (again, under Application Appearance, then Icons → Advanced) and the window border/button sizes (Workspace Appearance → Window Decorations → Configure Decoration). Log out and in again, and your desktop should be much more usable. Many applications like KDevelop are immediately usable while others like Dolphin and Digikam need only trivial adjustments. Still problematic are plasma and the system tray icons, but it looks like something is happening here.

What’s broken, however, is HTML. Well, not entirely; most websites use fixed font sizes which when scaled up (use NoSquint in Firefox) look fine. Some emails are the same, and again “zooming in” on them is pretty easy. What’s worse is that some fonts get scaled up way too much — “big” fonts end up enormous for some reason. I’ve had a few emails I’ve had to zoom _away_ from, and found one website with an irritating mix of huge and tiny fonts. The solution seems to be to use fixed (pixel) sizes for fonts and “zoom in” in the browser/viewer, but this confirms my theory that HTML is rather tacky. The necessary re-scaling of graphics and resulting pixelation is much less important than getting the fonts right in my opinion.

Okay, let’s wrap up by going back to the laptop. What’s the keyboard like? The keys are fine to press with good tactile feedback; for prolonged use it’s not ideal however since the short strokes result in high forces on the fingers. Even the layout is reasonably good; the short right-hand shift key is slightly weird and I miss having Home/End next to the arrows like on my TypeMatrix, but those keys aren’t far away — the most annoying thing (being a long-time ThinkPad user) is the swapped Ctrl and Fn keys. The touchpad is reasonably good, though too small to cover the whole screen accurately without swiping several times, and having to “click” it is definitely not as good as having dedicated buttons. This is where the touchscreen compensates a surprising amount: I frequently find it easier and faster to click on buttons, drag windows about and often even select text by pointing my finger at the screen than using the traditional methods. I didn’t think a touchscreen would be useful in Linux. I was wrong. (Here’s some hints on getting more out of the touchscreen, though sadly multitouch support is not ready yet.)

What else? Oh, yes, the touchpad doesn’t get disabled when in tablet mode (can be annoying if you balance the machine on your knee). That’s about it for now, except to say that programming while riding Swiss trains has never been more productive!


HouseBus

What is it? A mobile home? No, a house-wide communications and power grid.

It doesn’t exist yet (as far as I’m aware), but it should: a wiring system combining power and communications wires. To reduce interference and improve efficiency, the power supply should be high voltage DC (e.g. 400V). To keep wiring simple and upgradeable, no electronics (hubs or routers) should be needed at sockets or junctions and the number of data wires should be kept low (e.g. 2 or 4).

All electronics should be in the devices plugged in. Because the data wires would be used as a shared bus, several techniques from wireless networking would be relevant, for example contention management and encryption. And because the system could be useful for both large numbers of low-bandwidth devices and a few high-bandwidth devices, it may make sense to divide the data part into two or more sub-systems, allowing usage of old/cheap devices on one sub-system and more frequently upgraded/expensive high-bandwidth devices on another.

Why would it be useful? Well, the same wiring system could be used everywhere in a house, allowing any devices to talk to each other (without worrying about wireless signal strength or extra costs of wireless). Lights could be connected into a communications system, turning themselves on and off at the command given by a switch, mobile or server, with very low extra cost in the light-bulbs. Fire alarms could easily communicate with one another, as could any other sensors you might want around the house (baby monitor, ambient light sensor, security system — you name it).

Assuming high-enough bandwidth communications systems could use the same network (which seems likely, considering the existence of ethernet-over-power devices), the exact same system could be used for home networking — possibly not for video streaming but very likely for internet connections, music streaming and the like. Yay for wired networking without DIY network cables everywhere!

Is that not a system worth asking for? Especially since replacing existing household wiring with DC should increase efficiency (and hopefully finally allow the world to standardise the voltages and outlet sockets so we don’t have to carry adapters every time we travel)?

The other thing which is needed is a standardisation of laptop power bricks, but that’s a slightly different story.


TypeMatrix, a year or so on

It’s almost exactly a year since I bought a TypeMatrix keyboard, and I think it’s time for a review.

The good? The key travel is good (if ending with a jarring feeling and, especially with the latex skin, a little stiff). The keyboard layout is very good (not quite optimal, but a massive improvement on the usual typewriter-derived layout). And it has built-in Colemak support, which has saved me a few times when using a computer without the Colemak layout set up.

The bad: captive cable. Oh, and it recently broke. I was cleaning some keycaps with a slightly damp cloth, and a very small amount of water (and cif) got onto one of the traces, resulting in several non-working keys. The TypeMatrix support people offered me a half-price replacement, but no repair or free replacement since I’m responsible for the water damage.

When I dismantled the keyboard, the damage to the electrical traces was obvious, but repairing them hasn’t turned out to be easy (I may buy a circuit pen off eBay; aluminium foil is conductive but doesn’t make good contact with the traces). What I can see, having dismantled the thing, is why TypeMatrix don’t offer to repair keyboards: to get the thing apart I had to break off a lot of plastic lugs, and if I ever get the circuit traces repaired I’m still going to have a job putting the keyboard back together properly.

Hence, I’m in two minds about the TypeMatrix. It’s a lot nicer to use than a standard keyboard, but it’s not well constructed (if you ever have to take it apart).

As a quick comparison, I’m typing now on a Logitech G11 keyboard (fairly standard full-size layout except for an extra block of keys on the left, with cheap membrane keys with a lot of travel and cushy/squashy feel). The keys themselves are a little worse; if anything I think the force required to press them is less, but they have long travel and more friction if you don’t press them exactly square (which is common for me). In terms of layout, though, my first thoughts were “wow, I can’t remember ever having used such a *bad* layout”! (To tell the truth, I have been using my ThinkPad’s keyboard too recently, and that doesn’t seem so bad, perhaps because the key tops are larger and travel shorter.) The row one up from the home row is better oriented for the right hand at the cost of being worse oriented for the left; nothing serious. The number row above is almost one whole key to the left of where I’m used to it being. Weird. But worst, the bottom row is positioned such that I often can’t work out which finger is the best one to use (especially for ‘z’ and ‘x’).

Now, I get the feeling most people aren’t nearly as bothered by bad keyboards as I am, but, if you frequently use a computer at a desk and ever find typing uncomfortable, I’d recommend trying a keyboard with keys arranged in straight vertical columns.

As for what to try, though, finding recommendations is unfortunately rather hard. Jarred Walton at AnandTech has recently reviewed a few, most recently the ErgoDox. All three he reviewed look quite good to use, but all three cost close to $300 (by the time you include shipping to Europe at least), and the last one requires some DIY (at least, to get the most out of it). Besides those and the TypeMatrix keyboards, the only others I’m aware of are ones intended for point-of-sale (i.e. cashiers). Why aren’t there more options? Seriously?


Java: you could or you should?

What’s a piece of advice said differently?

If someone asks you how to boil potatoes, there’s a simple answer: put them in a pan, immerse them in water, and put the pan on the heat. But if you were asked how to cook spuds, what would you say? Just boil them? Or would you also mention fried potatoes, roast potatoes, jacket potatoes and maybe microwaves? Or even suggest a shepherd’s pie?

There’s a serious point here. It has several times occurred to me that the JRE would probably contain better-quality libraries if it were merely a collection of high-quality, community-contributed, largely independent libraries built on a small base (much as Boost is to the STL and other core C++ libraries) rather than the kitchen-sink monolith it currently is. Granted, not everything would be hunky-dory, but if enough people think one thing is broken then they just replace it. The new library might not immediately make it onto all target devices, but hey, if it’s good it could get there somewhere down the line.

But it’s not just the JRE. Java has a lot of libraries, but several big frameworks too. What’s a framework? A big mess of code doing many different things with lots of interdependencies and lots of pressure to do things its way, as I understand it. How dynamic can that be? And more to the point, is it fun spending hours learning how to use the thing when you only need it for one small job?

And then there are IDEs. Java’s got some amazingly capable editors, that’s got to be said. But why does practically everyone say “use Eclipse”? Sure, in terms of features it’s untouchable. But its UI demands a redesign probably more than any other app I’ve ever seen, and, well, do I need to say anything else?

Okay, rant over. (And no, this is not saying “Java sucks”. It doesn’t, it just seems to have a very high tolerance for poor design in massively used components.)


Why division remainder and modulus are not the same

It’s simple really. What’s (4-6) mod 12? There’s two ways of thinking about this: the right way and the wrong way.

One is modular arithmetic. If you don’t know what that is, think of a clock: if it’s 5 o’clock, one hour later will be 6 o’clock. Ten hours after this will be 4 o’clock, not 16 o’clock (let’s stick with a 12-hour clock, OK?). So what was the time 6 hours before 4 o’clock? 4 – 6 = 10 mod 12 — we can’t represent negative numbers on a 12-hour clock, so we need to add 12 until we get a positive number. “5 mod 12” is just a mathematical way of saying we don’t know how many days in the future (or past) and we don’t know whether it was day or night, but we do know it was 5 o’clock.

So what’s the other way of thinking about -2 mod 12? Well, notice that for positive numbers, x mod y is the same as the remainder of x/y, for example 20/12 is 1 remainder 8 — if you’re a programmer, you might write this as 20%12. Now you see what I’m getting at. And it’s not too hard to see why this might be a problem either. Say you calculate the hour on a clock, and store the positions to draw the hand in an array of length 12 — you need an index in the range [0,11], so (4-6)%12 is not much good if it returns -2 (or worse, in C90 and C++03, there’s no guarantee what the % operation will yield). It’s nice to see that both Perl and Python interpret “-2 % 12” as “(-2) mod 12” (which I think is far more useful than “(-2) remainder 12”), although it’s all a bit confusing since in Java and C♯ and usually (AFAIAA) in C++ “-2 % 12” will result in -2 (see Wikipedia for a list).
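
As a quick illustration (in Rust, whose % behaves like C’s, Java’s and C♯’s), here is the difference; rem_euclid is the built-in way to get a true modulus, and the last line shows the usual manual workaround:

fn main() {
    let hour: i32 = 4 - 6; // -2

    // `%` is a remainder: the result takes the sign of the left operand.
    println!("{}", hour % 12); // prints -2

    // True modular arithmetic always lands in the range [0, 12).
    println!("{}", hour.rem_euclid(12)); // prints 10

    // The usual workaround in languages with only a remainder operator:
    println!("{}", ((hour % 12) + 12) % 12); // prints 10
}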

So why am I pointing this out? Because I’ve seen so many programming languages implement “x % y” as a remainder operator and call it a “mod” operator. And it’s just not. If you write “day_of_week[day % 7]” you need to be careful that “day” is not negative, or your program will probably seg-fault or do something funny (or throw an exception in safe languages). If you actually used modular arithmetic, on the other hand, it would handle negative days just fine. And yet I’ve seen this kind of code so often, and been stung several times myself when I thought the “day” was non-negative, that in the end I just decided to use “mod” even though it’s usually slightly slower.


how autocomplete should work

Auto-completion (of words previously typed, member function suggestions, etc.) is both useful and annoying. It is useful because it saves typing. It is annoying because it visually hides other parts of the document, distracts, and blocks keys (worst when you’re not looking at the screen and the auto-completion means Enter or an arrow key doesn’t do what you expect).

Use dedicated keys. Making all keys like arrows, space, enter and tab work as usual even when suggestions are shown would remove some of the annoyance. Keyboards have a bunch of function keys; use those! E.g. F1/F2 (or whatever) to activate the previous/next suggestion, with Escape or any other keyboard input closing the completions box (finalising the selection). Since using F1/F2 or whatever isn’t obvious, just write something like “F1: previous, F2: next” at the bottom of the pop-up box.

Make the suggestions box transparent. Not being able to see what’s behind the suggestions box can be annoying, so make that possible (e.g. 20% transparency of pop-up box’s background).

Yes, just a pet peeve, but making those things less annoying really wouldn’t be so difficult!


Update on using the TypeMatrix keyboard

It’s been two months now since the last post, and most of what I said before is still relevant. What has changed is:

  • I’ve got used to using it with the silicon cover. Think I even prefer the feel using the cover now, since it slightly cushions the fingers. That said, I can’t say I really prefer the TypeMatrix keys over my ThinkPad’s keys (nor the other way around).
  • Switching between the layouts of the ThinkPad and TypeMatrix keyboards is less problematic, though I still occasionally mis-hit keys.

More significantly, I’ve tried using the TypeMatrix while gaming. Here it doesn’t do so well, for two reasons: the layout, and the Alt+Tab button. In games you really need to be able to hit the right keys quickly and without taking your eyes off the screen, so not being able to feel a gap between the number row and the F-keys, or between the bottom letter row and the play/menu/app-switch buttons doesn’t help. And that app-switch button: if you’re playing a competitive game on Windows, the last thing you want to do is accidentally hit Alt+Tab. And that button is right between Alt and C… personally my feeling is that we could do without the Alt+Tab button completely.


TypeMatrix

I’ve recently bought a TypeMatrix keyboard. Here’s my thoughts.
QWERTY model

There’s two things to comment on really: the physical construction and feel, and the new layout. I’ll start with the former.

Key travel is good, a little harder to press than my Thinkpad keyboard but with better feel. It’s short with relatively smooth pressure feedback; I’ve had to get used to pressing the keys slightly harder, but otherwise I like it. I use the silicon skin; without this, keys require a lighter press and feel like they bottom out more suddenly — a more sudden shock to the finger than with the Thinkpad keyboard, but more precise too. I couldn’t say which of these is best for the hands — to be honest I find all three variants perfectly good enough, with the minor down-side to the TypeMatrix with skin being that I tend to press the keys slightly harder than absolutely necessary to make sure letters get pressed in the correct order.

The physical construction of the device — it feels high quality, if a bit heavy. Size is small, although it’s not especially thin. Some people reportedly carry the keyboard with them but I certainly prefer leaving it on my desk due to weight and the captive cable. It has no extra ports, switches, etc., just the keys on the front and one protruding cable.

Angling the two halves out in a fan might be slightly more ergonomic, but I don’t think it would actually be significant — ergonomically I find the keyboard very good (except that the right-shift key requires a long reach with the pinky).

Moving on to the layout, the most striking thing is that keys are vertically aligned in a grid. I’ve only been using the keyboard 1-2 weeks and am still getting used to the layout so I can only talk about initial impressions, which are that this arrangement of keys feels much more natural and works well with a little adjustment (I am typing this article on the keyboard, and at my normal typing speed). Dropping old habits (or switching back to a “normal” keyboard) takes a little effort though; in particular you have to reach for the keys on the lower row (ZXC…) quite differently, and reaching to where you expect the B key to be tends to result in pressing both B and Enter simultaneously.

Another thing that takes some getting used to is the large grid of keys on the right hand side with little about them you can feel to tell where your fingers are. There’s the usual tactile lump on the index finger keys (F and J on QWERTY) and another on the down arrow key, but getting used to where the arrow keys are and in particular the right control key in relation to the arrow keys takes a bit of getting used to. (Note: I use the silicon cover. Without this, keys feel crisper and the tactile lumps are easier to feel, which does improve the situation.)

Other changes are easier. The tall shift keys and central enter and backspace keys do feel strange at first but aren’t actually hard to get used to. I’ve actually hit the shift keys several times intending to hit either Enter or Capslock (which I use as backspace); this is not so bad since it results in nothing happening instead of what you expected (on the other hand, I’ve hit enter unintentionally a few times, submitting forms with incomplete input).

Other things: the home/end keys by the arrows are easy to get used to. The two keys on the bottom-right corner function as page up/down normally and back/forward when ‘fn’ is pressed, which also works well (though I’d have preferred not to have to use ‘fn’ for back/forward). The three new cut/copy/paste keys (used with ‘fn’) function perfectly (various linux software), but just seem redundant (and are shifted to the left compared to what you’d expect). These keys (without ‘fn’) also work fine, but seem a bit strange (the app-switch key is just redundant as is the desktop key, “right click” is not where you’d expect it and the play key… I sometimes press accidentally).

The number-pad area works fine, but I pretty-much never use those so won’t comment further.

Across the top there are ‘eject’, ‘power’, ‘sleep’ and ‘wake’ keys. For me, ‘power’ shuts down the computer (which I didn’t want, so I disabled it), ‘eject’ and ‘wake’ do nothing, and ‘sleep’ works correctly. I’d have preferred a ‘lock screen’/’screensaver’ key.

There are also calculator, mail and browser keys on the right edge, which are useless to me. I might remap them as forward/back or alternate media controls or something.

F-keys: if you use them a lot, you might find the TypeMatrix layout a little annoying. They’re separated between F5 and F6 instead of the usual three groups of four, and — as with a lot of special keys on the TypeMatrix — there’s not much tactile feedback to tell you which buttons are which. This is another area where I prefer the (2010/11) ThinkPad layout.

Overall, I like this thing a lot and think it’s well worth the money if like me you spend a large portion of your day using a keyboard. Not having each row of keys shifted by some strange amount makes the layout feel so much more natural — easier on the fingers and easier to remember. I don’t understand why virtually no-one else makes keyboards without the stupid shifted rows (it’s awkward switching between the two to be sure, but worth it IMHO).

Making the keyboard so compact seems unnecessary in my opinion, or rather, some of the layout on the right-hand-side seems a bit strange. I prefer the Thinkpad keyboard for placement of the arrow keys and probably also the shift key. The stated reason for the compact size is so that the mouse can be placed closer to the right edge; I actually find it more ergonomic having my mouse below the keyboard (so it’s just a small bend of the elbow to reach it).

What I’d like to see:

  • A cheaper version, probably based on standard laptop-style keys. Not because the $110 version isn’t worth the price if you’re working day-in, day-out in front of a keyboard, but because a cheaper version would be much more affordable for a second keyboard left at home and to recommend to friends. There are enough people interested in alternative keyboards because of RSI or ease of learning, but unless such models can compete with the dirt-cheap keyboards sold in electronics stores everywhere they don’t stand a chance with most people.
  • A laptop-integration version (specifically, one I can put in my thinkpad).
  • Better separation of the arrow keys (really clear tactile feedback is good).
  • An integrated USB hub. So useful.

Reading is eating

I just stumbled across a bit of food for thought, and think I did (my nice little jog-in-the-rain turned into exercise of a different kind). Have a bite. Go on, eat the whole thing while you’re at it — it’s only 8 minutes. Then it might be an idea to chew-the-cud a bit before we go on — heavy lifting required.

Fed, rested and primed for action now? Good. Now lets get the obvious conclusion out of the way.

I shouldn’t overeat. Did I say that? Come on, not that one. After all, once our stomachs are full eating gets painful anyway. I expect you know the feeling, after having read for a few hours, when your mind blanks out all but the page in front of you, your vision starts to blur, concentrating on what you’re reading becomes difficult, and it dawns on you, slowly at first, but more and more insistently: I need to get up and do something. No, I expect you were familiar with that before you even stumbled across this article.

No, my first conclusion was: I may not have a problem with eating and exercising off food, but I sure have put on a bit of cognitive fat, so to speak. There are a few points here:

  1. There’s no point consuming information if you don’t use it. Of course, there’s no point going on a starvation diet either, besides the fact that we need sustenance to grow. But reading something just because it tastes nice isn’t a good idea, nor is reading something just because my teacher tells me I need to know this.
  2. When you’ve got a job to do, concentrate on the job and not the food to fuel it. Sure, you shouldn’t ignore whatever information you’ll need, the same as it’s not nice to come home after an exhausting day’s work and discover you need to cook supper too. But thinking “I need to know the theory in this textbook, so I should start by reading the entire book” is not the right way to get something done either.

I hope you already knew that.

I did.

In theory at least.

So, what’s the other conclusion I drew?

Well, consider this. You come up with some cool idea, so you write it down. You don’t trust it to your memory, because you know if you do that you’ll forget at least three quarters of the good bits. (I certainly do.) Job done. You think of something else. You write that down too.

A week goes by. A month. Soon you’ve got more pages of notes than you can organise, and you’ve already discovered by now that you’ve written the same idea down twice, more than once. If things go on like this, soon you’ll have a pile of disorganised, highly-redundant notes, and any time you want to find some note in particular you’re going to have a huge pain. So what do you do?

Thankfully you wrote your notes on a computer so you can easily rename them and move them around. So you categorise them. You’ve got quite a few notes about topic X, so you group them together. You notice several are about some subtopic, Y, so those can go together too. You even tag stuff about topic Z, despite the fact that topic Z pops up all over the place, contrary to your top-down categorisation.

So, some hours of work later, you find you’ve got things a bit better organised. You’ve not categorised everything yet, but it’s just a matter of time. You’ll get to it.

Time goes on. Notes get better organised. At the same time, more notes get added. You realise from time-to-time that some old categorisation wasn’t very effective, or that you can also categorise items according to some new tag, and start shuffling around already-categorised notes. You realise that over time, the work of just managing your collection of notes grows in proportion to the number of notes you have. If work goes on like this, you’ll either spend less and less time doing anything original as more and more of your time is spent keeping things organised, or you’ll have to give up on the organisation and accept that your notes get more repetitive.

Except that that’s not all that will happen if you stop organising. Any plans you had once will get buried. Jobs that you planned out will never get done, not because the information’s not there, but because you can’t find it. Any attempts to stand back to get a wider perspective on all the little conclusions you’ve drawn over time will become extremely difficult, if not impossible, because you can’t _find_ all those little conclusions you’ve drawn.

So what happens?

I’ve been thinking about what happens to society. Already we’re suffering, not from information overload, but from information disorganisation. Physicists need to know about the particles or waves or structures they’re studying, but also need to be able to do some pretty fancy maths in order to achieve anything. Biologists need to know about cells or organic molecules or organ structure or many other things, but in order to do their work, geneticists need complex computer algorithms to analyse anything, pharmacists need a lot of complex chemistry to engineer their drugs, and pretty-much any biologist needs to be able to handle a lot of statistics to prove anything (at some confidence interval).

So, what’s happened? We’ve specialised. Now, more than ever, young people go to university to study maths or English or chemistry or one of many other subjects. Now, more than ever, young people go on to do PhDs — but even if they don’t get that far they’ve already had to specialise from being a computer scientist to focus on algorithms, or language theory, or machine learning, or databases, or computer vision, or encryption, or data transmission and information theory, or one of many other things. There’s no such thing as a generalist any more. Is that a good or a bad thing?

Well, judging by the incessant discoveries in medicine, in computing, in climate science and in science in general, one can hardly say it’s not worked out. I won’t go on about this, because there’s no point — we’ve been developing new drugs, faster computers, better telescopes, etc., for decades, and there’s no sign that this is about to slacken off.

One thing that has got harder though is using results across fields. As the lowest-hanging fruit in mathematics gets picked, new developments become harder and harder to reach. That the proof of Fermat’s Last Theorem took so long is partly because it required combining so many areas of mathematics. That the Polymath project had such success proving the density Hales-Jewett theorem was due to the fact that it allowed the collaboration of many different mathematicians with different backgrounds. So what might be possible with massive collaboration across many fields of science? What might be possible by combining the knowledge of the whole of humanity in one place? Proving this point with an example is obviously beyond my capability, but I hope you get the picture that the information processing resources available to individual humans have an enormous effect on society. So big, in fact, that I can only see four possible outcomes:

  1. We continue as we do now. Here and there communication of ideas and results get optimised somewhat, general education may improve, big organisations continue to out-compete smaller ones due, often enough, to being able to employ specialists in more overlapping areas. Scientific advances continue to be made, but we remain a human society, working within human limits. Fundamentally this outcome is unstable due to the following possibilities, but striving as we do for control and economic return we may be able to keep it up for a long time yet. 
  2. At some point, things slip up. Society gets more fragmented as we continue to specialise, and instead of asking people specific questions about their business (you had any problem with foxes lately?) we are more and more reduced to asking peripheral questions (how is work?). Key knowledge is lost as people die or migrate or fall out with each other, university-level education gets harder as professors have to focus on more specific fields, and in the end society fails to provide enough young scientists to replace the old, resulting in a spiralling collapse of much of science. We don’t lose our cars or computers or Airbus A380s, at least not immediately, because people already know how to build cars and computers and Airbus A380s. But scientific advance collapses and maintaining society as we know it becomes a struggle.
  3. Genetics or implants or some type of human modification enables us to become smarter. People can take in larger amounts of information and process it further. Temporarily, the new “super people” pick up knowledge across many different topics and produce many new discoveries. But eventually they become the norm, and society — a faster moving, more energy intensive society — has to make even better people to keep moving at the pace people have become accustomed to.
  4. Computers can already process information a lot faster than us — and computers are, and probably will be for a while yet, increasing in capacity exponentially at a very fast rate. If they get smart enough, they may end up doing our reasoning for us. At first, of course, we’d remain in control — but with them being a lot smarter than us (or at least able to take a much broader point of view), along with the constant pressure to let those best at doing anything organisational do it, it would be almost inevitable that they would end up running society, perhaps leaving us as pets to them as dogs and cats and monkeys are to us.

I won’t say that one of these scenarios is better, or more likely, or preferable to another, because I didn’t write this article to talk about armageddon. But I will say that information is more important to us than ever.
