I promised that I would write this, so I feel I really should, and I’ve severely neglected my blog of late. When I was going to write this, it felt pretty topical; it’s about Steve Jobs. Now it’s not topical at all, so you’re just going to have to forgive me.
It’s a really weird thing to say that you’re affected by the death of someone that you’ve never met. I know that I personally was weirded out by the public reaction to the deaths of Princess Diana, or Michael Jackson, and I’m usually not too bothered by the deaths of billionaires, either.
I think that what got to me was that Jobs did that rarest of things; he changed my mind.
One of the principles of logic is that some propositions are axioms; they are not derived from other propositions but their truth is deemed to be self-evident. Everyone has their own set of axioms; e.g. I take it to be true that the universe which I can see and taste and touch is real, and not an illusion that is indistinguishable from being real. I have no way of proving the truth of this assertion, but the alternative is somewhat solipsistic.
It used to be that I was really rather anti-Apple. I believed that Macs were too expensive, that the iPod/iTunes ecosystem was inherently corrupt. Now I have an iPhone, I’m typing this on an iPad, my work machine is a Mac Pro, and I’m seriously considering buying a MacBook Air. What the hell happened?
Honestly, I’m not sure. Every step along the road felt like it made sense; the iPhone 3G was so obviously superior to all the other phones on the market at the time that buying it felt like a no-brainer. The work Mac Pro has POSIX underpinnings without the bullshit that using Linux demands of you. The MBA is an object of mouth-watering beauty, as well as being stupidly thin and light. But taken all together, it represents an enormous shift in what I had come to believe about technology and gadgets.
So that’s why I think I was affected by Jobs’ death; I recently read his biography and it’s plainly obvious that he was not an easy man to work with, or to know. To put it plainly, like many great men, he was often a dick. But, sometimes, by sheer force of personality, he made things happen that were insanely great.
It’s a debate to be had whether history is guided by the inevitable forces of economics and technology, or kicked forwards in great screaming leaps and bounds by great men and women. It’s hard to say; would personal computing as we know it exist today without the original Mac? Would smartphones and tablets have taken off? Would all these things still exist but actually just sort of suck?
I don’t know, but I have my suspicions. And my younger self would probably massively disagree with me.
I like to think of the various technology companies as being armies on a battlefield. This is inevitably going to be a really strained metaphor, as battles are typically fought between only two armies, but let me run with it.
Each army has some advantage, home turf it wants to protect. Apple sits in a curve of a river, defended by fast-flowing water. Google has a spot on a hilltop, with a glorious view. Microsoft has a pass through the mountains. RIM is lying bleeding in a ditch. And so on.
Inevitably, the way to find the company’s turf is to identify how it makes money. Google sells ads on search, Apple sells hardware, and Microsoft sells Windows and Office. Almost every other activity these companies engage in is a flanking offensive, designed to prevent one of their enemies from breaking through and hitting them where it hurts.
That’s why Google has things like Android and Chrome OS, and Microsoft has Bing, which are all absolutely haemorrhaging money. Bing exists to stop Google entirely flanking Microsoft on the web, Android exists to prevent Apple entirely tying up mobile, etc. They’re pre-emptive strikes, to act before it’s too late.
This, incidentally, is why Apple is so furious at Google; Google started the war by striking first at Apple’s home territory. I suppose Google would have been equally furious if Apple fired the first shot into ads or search (a business Apple still isn’t in).
So to understand Windows 8, you really have to understand how the generals at Camp Microsoft think; they have a damned good mountain pass, and so it would be great if their mountain pass could exist in more places, so they could expand their mountain-pass empire. Possibly they need new sorts of passes, because mountains are easier to surmount than ever before. And to be fair, I did warn you that the metaphor was going to get strained. The trouble with their strategy is that it really is as stupid as it sounds, and it’s a really good example of that old war saying: “Lions led by donkeys.”
For those of you who haven’t kept up with the Windows 8 hoopla, it’s going to have a full-screen touch-based UI (codenamed ‘Metro’), which is what you see when you first boot up your machine, and whenever you want to launch a new program. The pre-existing Windows desktop is something you can jump into from this new UI, and the start menu is gone, replaced with jumping back to the full-screen Metro UI.
Microsoft were loath to admit it, but Metro is clearly a response to the iPad; especially as Windows 8 will run on the power-efficient British-designed (national pride FTW) ARM processors used by the current crop of tablet devices and mobile phones, as well as the Intel-designed processors in current PCs and laptops. The poor power efficiency of the Intel chips is the primary reason that your lap gets rapidly scorched by the searing heat put out by a laptop.
The thing about the iPad is it’s like how I imagine people felt when they first used text-interface computers in the 70s and 80s; right now, they may be difficult and limited, but you know that the descendants of this thing are going to be the future. There will always be a place for the PC as we know it today, just as there’s still a place for connecting to a terminal over ssh and using emacs to edit a cron job, but tablets and similar devices are going to take over an increasingly large amount of our day-to-day needs.
And the iPad is already selling like hot cakes, so Microsoft needs to flank Apple, and they do it the only way Steve Ballmer knows how: make something iPad-like that can run Windows.
And so Windows 8; an iPad-like touch UI jammed on top of a standard Windows desktop, with ARM support. For Microsoft, it’s an absolutely instinctive response. It’s the reason that Windows Phone 7 is called that, even though it a) Isn’t based on the same code as desktop Windows, and b) Doesn’t even have windows in the interface. They’re bound, inexorably, to the idea and brand of Windows, even when it doesn’t make sense.
What they’re relying on is their backwards compatibility advantage. Why buy a device with Windows 8 on it? Because it’ll run all your old Windows programs. Because it’ll be familiar. Because, essentially, it’s easier than switching to Mac or Linux. This is especially true for gaming, which is severely underdeveloped on those two operating systems.
The trouble here is threefold: firstly, old programs written for existing Intel-based Windows machines won’t actually work on the new ARM-based devices. Theoretically, they could be reworked and recompiled to run on ARM, but it’s currently unclear if this will even be allowed. Regardless, on Day 1 of Windows 8, hardly anything is going to run on the ARM-based iPad competitor Windows 8 devices.
Secondly, backwards compatibility is a millstone. On a device like the iPad, the contract with applications is that they’re run in tightly controlled conditions; for instance the OS can kill them dead at any moment with little or no warning. The advantages of this are manifold; you’ve got massive security benefits, as well as improved battery life, etc. But you can’t impose conditions like this on applications retrospectively, at least not without enormous difficulty. Look at the kerfuffles around UAC in Vista, and that was merely enforcing what had been recommended practice for many years.
Thirdly, developers are lazy. They could write applications for the new Metro UI, or they could write a standard desktop application that works on all versions of Windows. Microsoft are promoting a new framework for Metro development called WinRT, which is going to be a wonderful, brilliant replacement for Win32, the old API, etc. etc.
Except that we’ve been here before. Windows codename ‘Longhorn’ was supposed to introduce a new platform for Windows development, called WinFX, which would be the foundation on which the OS rested. Longhorn eventually became Vista, and most of the new framework arrived, although as a framework for applications, not as the foundation of the OS itself. Sure, some people are using it, I suppose. But it’s hardly taken over the world, and that’s despite it being backported to XP so that it could be widely used. WinRT isn’t going to be backported at all, so it’ll be Windows 8 only. Incidentally, commenters, I’d be glad to hear of any WPF apps you know of; the only high-profile one I can think of is Visual Studio 2010.
Based on that history, WinRT is going to tank really, really hard. Why would you write an app using it, knowing that you’ll restrict yourself only to people running Windows 8?
Honestly, if I were in charge at Microsoft, I would spin off Metro into a separately marketed OS. Base it on the existing Windows kernel, but totally rebuild the top layer to jettison Win32, then make Metro the best OS for touch devices it could possibly be, with no compromise to the old way.
Then make a Windows 8 that is essentially very, very dull. At this point, there’s not a lot of innovation to be wrung out of the desktop, so it’s really a polishing exercise. Microsoft mostly makes money by selling OEM copies of Windows anyway, so all it has to do is continue making versions of Windows which are good enough to stop people switching; this is a relatively easy job.
Mashing the two things together is the stupidest thing you could possibly do. On one side, you risk angering people who really just want Windows with windows, and don’t want to be using a mouse or trackpad with a UI designed to be touched; on the other, you’re going to do a half-arsed job of being a tablet.
For instance, I’m sure someone is going to try and build an Intel-based tablet, which will get hot, and have a fan, and terrible battery life. And poking desktop apps with your finger will suck, so it’ll include a stylus, which will get lost.
Maybe Microsoft’ll succeed with all this. Maybe. Honestly though, I think it’s going to be a failure. Possibly a failure larger than Vista, although they may have the sense to course-correct in time for Windows 9 so that they don’t permanently damage the dominance of Windows itself.
I think the engineers and designers have really done a remarkable job; honestly, reading about WinRT, and looking at the boldness of the Metro interface, they’ve really done themselves proud. It would have been really easy for them to do what others in the industry have done and just rip off the iPad, and that they’ve tried to reimagine the concept is commendable.
I suspect that the problem comes from the top; Steve Ballmer, the CEO of Microsoft, talks about “Windows everywhere”. It honestly doesn’t make any sense. There’s a reason that Apple didn’t put OS X on the iPhone and iPad, even though the foundation of iOS is the same. It’s a real shame that Steve Ballmer is too stupid to understand that.
And yes, I know ‘stupid’ is a little ad hom, but honestly, look at this video; you can’t get much more wrong than he turned out to be.
I’m sure you’ve all heard the news; the OPERA collaboration have taken measurements which seem to suggest that neutrinos emanating from the CERN Super Proton Synchrotron travelled the 455 miles through the Earth’s crust to the Gran Sasso Laboratory at very slightly more than the speed of light in vacuum.
For those not versed in the ways of the physics, a neutrino is a fundamental particle. It’s a lepton, the same family of particles as the electron. Unlike the electron, the neutrino has no electrical charge, and so can only interact via the weak nuclear force. That’s how they can travel hundreds of miles through the earth’s crust; they interact with the matter we’re more familiar with (atoms made of electrons, protons and neutrons) only very, very rarely.
That means to detect them you need huge, super-sensitive detectors, typically built deep underground to screen out the signal you would otherwise get from cosmic rays. One is the Super-Kamiokande detector in Japan, which contains 50,000 tons of water. When one of the rare interactions with a neutrino occurs, the interaction generates a very small amount of light, which is detected and used to infer the properties of the neutrino interaction which caused it.
The OPERA experiment was designed to measure a phenomenon called neutrino oscillation, or neutrino mixing.
There are three types of neutrino: the electron neutrino, muon neutrino, and tau neutrino. The Sun produces a vast number of electron neutrinos as a by-product of the fusion reaction which powers it. When detectors were used to measure the neutrinos being emitted by the Sun, it was discovered that the number was less than would be expected. This became known as the “Solar Neutrino Problem”.
Despite claims to the contrary in certain elements of society and the media, when new evidence is discovered, the theory has to give way. Either the models of what was going on inside the Sun were wrong, e.g. the fusion yield of the Sun was lower than expected, or some aspect of neutrino physics was not properly understood. Cross-checks with other measurements of the Sun indicated support for the Solar models. So the problem was with the neutrino physics.
It had been assumed that the mass of the neutrino was zero; all measurements made had indicated that it was, at the least, very close to zero. However, if the neutrino had even a very small amount of mass, it would undergo a very peculiar phenomenon due to a quirk of quantum mechanics, called neutrino mixing. Essentially, in the flight from the Sun to the Earth, some of the neutrinos would change flavour, from electron to muon, or tau neutrinos. The “missing” neutrinos were there all along; they just weren’t in the form of electron neutrinos that the detectors were capable of detecting.
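For the mathematically curious, the standard two-flavour approximation of this effect is compact enough to quote (θ is the mixing angle and Δm² the difference of the squared masses, both measured quantities; this is the textbook formula, not anything specific to this experiment):

```latex
P(\nu_e \to \nu_\mu) \approx \sin^2(2\theta)\,
    \sin^2\!\left( \frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right)
```

The key point is that the probability oscillates with the distance-to-energy ratio L/E, and is identically zero if Δm² is zero; massless neutrinos couldn’t mix at all, which is exactly why the Solar Neutrino Problem implied that neutrinos have mass.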
The OPERA experiment is designed to more closely measure this process by generating a neutrino beam on demand in an accelerator, and then measuring the mixing that occurred while the beam was in flight.
In doing this, they have apparently detected, to a good degree of statistical significance, that their neutrinos travelled superluminally from the source to their detector. This is well-known to be forbidden by relativity, so if this is a true result, then it will require brand-new physics to explain, and could mark the start of a new era of post-Standard Model physics. It would be one of those fantastic moments where something amazing is discovered by people looking for something else entirely.
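To put rough numbers on “very slightly more than the speed of light”: the figures widely reported from the OPERA preprint were a baseline of about 730 km and an early arrival of about 60 ns. Treat this as a back-of-the-envelope sketch, not their analysis:

```python
# Back-of-the-envelope numbers for the OPERA claim. The ~730 km
# baseline and ~60 ns early arrival are the widely reported figures;
# treat them as illustrative, not exact.
C = 299_792_458.0                   # speed of light in vacuum, m/s

baseline_m = 730.0e3                # CERN -> Gran Sasso, ~730 km (~455 miles)
early_s = 60e-9                     # reported early arrival, ~60 ns

flight_time_s = baseline_m / C      # light would take ~2.4 ms
fraction = early_s / flight_time_s  # (v - c)/c, to first order

print(f"light-speed flight time: {flight_time_s * 1e3:.2f} ms")
print(f"(v - c)/c ≈ {fraction:.1e}")  # a few parts in 100,000
```

A few parts in 100,000 faster than light doesn’t sound like much, but it’s an enormous effect by the standards of precision physics.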
That said, it could also be a mistake in their methodology. Relativity has stood unmolested for a century; every experiment concurs with it.
When a supernova occurs, as well as a blinding flash, there is also an extremely intense neutrino pulse. So intense that, even with the rare interaction of neutrinos with the matter we’re made from, the pulse could give you a fatal radiation dose. Knowing how far away the supernova is, the lag time between the observation of the light pulse and the neutrino pulse, and just a dash of astrophysics, you can work out how fast the neutrinos must have travelled, and it comes out subluminal.
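The classic worked example is SN 1987A, whose neutrino burst and light arrived within hours of each other after a journey of roughly 168,000 years. A rough sketch of the resulting speed bound, with round numbers only:

```python
# The supernova timing argument, using SN 1987A as the worked example.
# Round numbers only: ~168,000 light years away, neutrino and light
# pulses arriving within a few hours of each other.
YEAR_S = 365.25 * 24 * 3600           # seconds in a year

distance_ly = 168_000                 # SN 1987A, in the Large Magellanic Cloud
travel_time_s = distance_ly * YEAR_S  # light-travel time from there to here

window_s = 3 * 3600                   # arrival window, ~3 hours

bound = window_s / travel_time_s      # limit on |v - c| / c
print(f"|v - c|/c ≲ {bound:.0e}")     # a few parts in a billion
```

That’s around four orders of magnitude tighter than the effect OPERA reported, although supernova neutrinos are far less energetic than accelerator ones, so the two aren’t a perfectly direct comparison.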
So the OPERA guys have done the sensible thing: checked everything they could, and published. It’s very probable that it will turn out that some complex effect they hadn’t fully considered, or that was unknown when the experiment was designed, explains the measurements.
The real trouble with these sorts of things is how to manage the media, and how to stop them getting over-excited at things that may well turn out to be nothing, c.f. the hints of the Higgs that melted away in the late Tevatron data.
I actually don’t have much of a point to make about all this, except that the relationship between science, the media, and society means there’s really great misunderstanding out there about what’s actually going on. Reading the comments on BBC News, especially the worst-rated ones (thankfully!), demonstrates a mistrust of science and scientists, and a misplaced belief that science is about arrogance and certainty, when it is really more about doubt, and trusting the weight of the evidence. There’s also a certain group of people who seem to be fully unaware of just how well the world is actually understood these days.
It will certainly be interesting to see what happens if/when these neutrinos are shown to be subluminal!
Then, there’s the wackos, who take any new development as an excuse to just make weird and wacky stuff up. But they’re another story, really.
The Government recently set up a website on which citizens can register petitions to the government; Parliament will debate any which cross a threshold of 100,000 signatures.
As of the time of writing, the issues with the most signatures are to bring back the death penalty, keep F1 free-to-air, and to retain the ban on the death penalty. Happily, the petition to retain the ban on the death penalty is absolutely spanking the petition to bring it back. Partly I’m happy about this because — please excuse the ad hominem — I find its originator, Paul Staines (who blogs under the pseudonym Guido Fawkes), to be a fairly unpleasant character whenever I come across his views.
Mostly though, I’m happy about this because I’m fundamentally opposed to the death penalty.
There are multiple dimensions along which you can analyse this debate: the purely practical issues of whether it will be cheaper, or safer, for society in the long run to execute people rather than lock them up; and the moral dimension: is it fundamentally right to execute people?
I’m going to declare my bias: I think it’s morally wrong. It is an utterly appalling and regrettable thing for one human being to kill another, wherever and whenever it happens. There are, unfortunately, times when killing is necessary. Times when life must be taken in self-defence, or when it’s kill-or-be-killed, even occasionally in war: few would disagree that Hitler needed stopping.
But it’s important that when you fight monsters, you take care not to become a monster yourself. Hitler had to be stopped because he would have expanded Eastwards until either the Russians stopped him, or he’d enslaved or exterminated every single one of them. A little evil was committed to prevent a greater evil. There was no other choice.
Mostly, though, we do have a choice. When we catch a murderer, we can be better than they are. We don’t have to kill; we have a choice. We can lock them away so they’ll never do harm again. If we’re lucky, they’ll genuinely repent, and become useful members of society again. Gandhi said that an eye for an eye makes the whole world blind, and (even though I’m not religious, I like this bit) Jesus said that we should turn the other cheek. Be bigger, be better, be greater.
That’s the problem with execution; it’s not justice. It’s revenge. And revenge is, for lack of a better word, easy. It’s the easy thing to do, when you’ve been wronged, to gang up with your friends, with society at large, and exert your power upon the wrong-doer, and make them suffer. It’s a damned hard thing to do — and this is what I think Jesus was trying to get at — to resist the urge to do so.
I think one of the most remarkable people I’ve ever heard of is Rais Bhuiyan. In 2001, just after the attacks on 9/11, Mark Stroman went on a shooting spree, killing anybody he believed to be Muslim. Bhuiyan was shot in the face with a shotgun, but survived, although he lost the sight in one eye, and still has shotgun pellets embedded in his skull. Two others were killed. Stroman has since been put to death, despite Bhuiyan campaigning for him to be spared. I find Rais Bhuiyan’s example awe-inspiring.
Personally, I think that spending the rest of your life in prison, potentially decades, is a far more awful thing than merely dying. There are an awful lot of things in life which are worse than mere death.
Then there are the purely mechanical issues. Is it cheaper to just execute someone (nope)? Does it deter murder (probably not)? Will innocent people be executed by mistake (quite likely)? But honestly, those don’t bother me so much; they’re not my primary reason.
I think the very notion that the state be allowed to kill fundamentally brutalises our culture; it is a difference not of kind but merely of magnitude between hanging a murderer in this country and stoning an adulterer to death in Iran.
That all said, the theory and philosophy of punishment is an incredibly difficult topic; the death penalty really cuts to the heart of thinking about what punishment is for, and how best to achieve that end. It’s a topic I hope to return to.
This post about photo start-up Color on the NY Times website makes me utterly despair for the nature of modern capitalism.
Before the company had launched even a single product, Color raised $41 million from investors. That seems, to me, an absolutely stupefying amount of money. Given that Color’s first product (a perplexing and unfathomable iPhone app) flopped horribly, it seems now like an incredibly grave error.
It all stems (the NY Times speculates) from a desire not to miss out on the next Facebook, or Twitter, or Google. That’s all well and good, but in some ways this amounts to a get-rich-quick scheme; if it were that easy to pick the next winners from the next losers, then everyone would be doing it, and we’d all be rich.
In pursuit of easy money, they’re taking unconscionable risks. It feels like many of the bubbles of old; presumably what they’re hoping for is to fund a lot of companies, and then hope that the return on the investment is of such Facebookian proportions that it drowns the losses. It’s madness, because if that scale of success doesn’t materialise, you lose the money.
Finance is, essentially, a utility. Its function is to move money from places where it is in surplus to places where it is needed. As always, I grasp towards a physics analogy: it’s supposed to move the market from a higher energy state to a lower energy state, through a path that would otherwise be low-probability. In doing so, it allows work to be done in the system: jobs created, products made. The finance industry, for its part, takes a small amount of the energy it releases for itself. Overall, the system, the market, the economy, is better off for the action.
The trouble is that the financial sector has given itself airs. They see themselves now as creators, and players in the system in their own right, rather than the facilitators and plumbers that they should be. So they take risks, play the system, create complicated schemes of financial instruments to manipulate the market to accrue money into their own hands.
The trouble with all of this is that banks can then create situations where they actually help to destroy value. We saw this sort of reckless stupidity in the credit crunch, and we see it here, too.
What we should be doing is encouraging companies to start small, and bootstrap themselves. Make a product, sell a product, make money. Use the money to grow. Take finance where you need it, but only when you need it to make more money and a return is likely. Controlled, steady, sensible growth.
What banks seem to want is growth like an algal bloom, or an infection of smallpox. Big bang growth, get-rich-or-die-trying growth. Short-termist, irresponsible, madness.
I’m honestly a little surprised that the financial industry has survived the last recession relatively unscathed, and apparently with their ways thoroughly unmended. These people ought to be held legally responsible for the harm they do, in the same way a plumber would be responsible for an explosion caused by a botched gas installation. Instead we let them get away with wreaking the most awful harm on our economies, and ruining lives.
In other news, I’m going to try and make a commitment to updating on a more regular schedule, and hopefully being a little more personal with it. Had a bit of a trend to the essayist in recent(ish) posts, so I’m going to steer away from it.
Basically, the problem is that I’m either working or I should be working on my PhD most of the time, so blogging has taken it in the neck. Haven’t even finished my new blog design yet…