Sunday, December 19, 2010

Browser behavior

We've already got controls in the browser to do things like reject attempts to set third-party (not belonging to the domain of the Web site you're visiting) cookies. I think we need to extend this idea further to make more behaviors dependent on the first-party vs. third-party distinction:

  • Allow sending of cookies to the site you're visiting while not sending cookies in requests for third-party content. E.g. Facebook's cookies get sent when you're visiting Facebook itself, but if you're visiting CNN's Web site, then Facebook cookies are not sent when fetching the Facebook Like button on their page.
  • Allow JavaScript to run when it's fetched from the same domain as the site you're visiting, but not allow it to run when it's being fetched from a different domain.
  • Allow image blocking to block only third-party images.
  • In all cases where there's a block-list in the browser, allow for three settings: allow always, allow first-party only (block third-party), and block always.

Tuesday, November 30, 2010

IPv4 address pool draining fast

We're down to just seven /8 netblocks left. Those are the blocks assigned to the Regional Internet Registries (RIRs) who hand out blocks of addresses to entities needing to connect to the Internet. That means we've got effectively 2 blocks left, since when it hits 5 unallocated blocks each of the 5 RIRs will automatically get one of those 5. That'll exhaust the pool of addresses ICANN can allocate to RIRs.

That won't mean too much immediately. The RIRs have unassigned space they can keep handing out. But they won't be able to go to ICANN to get more blocks. That means that when they assign their last space, that's it. Finished. No more. You want on the Internet? Sorry, no addresses left the RIR can give you. It won't be a big cliff, but gradually there'll be more and more problems. Hosting centers won't be able to add more machines because they don't have addresses to give them and can't get any. Consumer ISPs will have problems signing up subscribers because all the addresses available in that area are in use and the ISP can't get more address space. I figure it'll take about 6 months to a year to really come to a head.
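
For scale, here's the back-of-the-envelope arithmetic behind those numbers:

```python
# A /8 netblock fixes the first 8 bits of the 32-bit address,
# leaving 24 bits of host/subnet space.
addresses_per_slash8 = 2 ** (32 - 8)   # 16,777,216 addresses per block
total_slash8_blocks = 2 ** 8           # 256 /8 blocks in all of IPv4

# Seven /8s remain unallocated; five are earmarked for automatic
# hand-off to the five RIRs when the pool hits five.
remaining = 7
effectively_free = remaining - 5       # 2 blocks ICANN can still allocate freely

print(addresses_per_slash8, effectively_free)
```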

Me, I'm going to finish prepping my LAN and gateway for full IPv6 capability and setting things up to run IPv6 internally in parallel with IPv4. That way I'll be ready for the inevitable switch to IPv6 by Cox. And I'm going to make sure any routers I buy will handle IPv6.

And I really ought to work out how to load custom firmware into Netgear routers and access points. I've things I want to do with them.

Sunday, November 14, 2010

Another reason to avoid the Windows Phone 7

Apparently the OS on Windows Phone 7 permanently modifies SD cards. Now, bear in mind that the card slot involved isn't an externally-accessible one, it's under the battery like the socket for the SIM card and you can't readily swap SD cards in and out of it. I suspect it's meant to offer carriers expanded storage for stuff that the user can't mess with or replace/upgrade themselves (if the carrier does things right). But it does bring up one point: Windows Phone 7 devices won't have an SD card slot users can swap cards in. No more dumping your files onto a card and reading them into another phone. No taking the card with phone files over to the computer and reading them in. No external backup of data. No external swappable storage period. To me that's a good reason to avoid those phones. I'd stick with an Android or other smartphone, where I've got the option of external storage.

Thursday, October 7, 2010

URL shortener problem

If you use Twitter, you're probably familiar with URL shortener services. Even if you don't, you're probably familiar with TinyURL or other URL shorteners. They seem convenient. No more having to type or remember long URLs, just create a short one. No problem.

Until one of those services went off the air. Its domain was seized by the Libyan registrar that controls the .ly hierarchy, because content at the locations its short URLs pointed to violated Libyan morality laws.

This is why URL shorteners are a bad idea. They create URLs that are under the control of a third party and which can be disrupted at any time. Since there's no direct mention of where the shortened URL points, once disruption happens it's impossible to locate the original destination. If you use the actual full URL, disruption can only occur if the actual site referred to is taken off-line.
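
A toy in-memory shortener makes the failure mode concrete (all names here are hypothetical; real services persist the table in a database, but the principle is the same): the mapping from short code to destination exists only inside the service, so when the service dies, the mapping is unrecoverable.

```python
import hashlib

class Shortener:
    """Toy URL shortener: the short-code -> long-URL table IS the service."""
    def __init__(self):
        self._table = {}

    def shorten(self, long_url):
        code = hashlib.sha1(long_url.encode()).hexdigest()[:7]
        self._table[code] = long_url
        return "http://sho.rt/" + code

    def expand(self, short_url):
        code = short_url.rsplit("/", 1)[-1]
        return self._table.get(code)   # None once the table is gone

svc = Shortener()
short = svc.shorten("")
print(svc.expand(short))   # the original URL comes back

svc._table.clear()          # the service "goes off the air"
print(svc.expand(short))    # None: the destination is now unrecoverable
```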

Note also that this is why you should make your own copy of content if you really care about having it available. If you merely link to it, it's vulnerable to the destination taking it down or just changing what it says. Only when you control the copy can you ensure that it doesn't change or become unavailable in the future. This may annoy copyright holders, but I feel that if I'm writing commentary on what someone said, then making a copy to prove they did in fact say what I claim they said falls under fair use. Making a complete copy is necessary to show that I'm not merely cherry-picking and taking bits out of context to misrepresent what was actually said, so it also falls under fair use.

Saturday, July 17, 2010

DNS root zone is now signed

The DNS root zone is now signed via DNSSEC. The idea behind DNSSEC is that the owner of a zone (roughly, a domain) generates a public key, and their DNS servers digitally sign the records they serve up. Intermediate DNS servers preserve those signatures, allowing querying machines to determine whether the records have been altered from what the authoritative nameserver sent. This makes it a lot harder to do a man-in-the-middle attack against DNS, hijacking a caching nameserver (say, one belonging to an ISP) in order to re-route traffic to an attacker's servers. Not impossible, but it's a lot more involved. That's because the public key needed to verify a signature is returned from the zone above the signed zone and is signed by that zone; e.g. the public key for a .org domain's records is returned from the .org zone's servers, and the key for the .org zone is returned by the root nameservers. So for an attacker to forge records, he has to subvert the entire chain back to the root. Each verifying machine has the single key for the root zone pre-loaded (and presumably verified out-of-band to make sure it's valid), so it's infeasible to fake signatures on records for the TLDs (e.g. .com, .org, .us). If I can control the records returned for .org queries, I can substitute my key for a target domain's, allowing me to forge signatures on its records. But since I can't substitute my key for the root key, I can't fake signatures on the .org records containing the key, and any verifying server will detect my forgery.

That's great for security, but it poses a problem for some (IMO unethical) ISPs and DNS providers like Network Solutions. That's because they've been playing a game: when someone asks for a domain that doesn't exist, instead of returning NXDOMAIN (non-existent domain) for the query, they return a valid result pointing the name at their servers, which serve up advertising, search results and the like. Essentially they take ownership of every single invalid domain and slap their advertisements on it. But as soon as downstream DNS servers (e.g. the ones in every home router) start verifying DNSSEC signatures, the gravy train ends, because those ISPs and DNS providers have no way of forging valid signatures. The only exception is that a registry operator can forge results for completely unowned domains within its scope, and the most common DNS software around has a flag to stop that (TLD servers are expected to only delegate to 2LD servers; they should never return actual results, so any results they try to return must be faked and should be treated as NXDOMAIN).

Monday, July 12, 2010

Cloud storage-as-a-service

Triggered by an article by Phil Jaenke.

You probably saw the announcement about EMC's Atmos Online shutting down. ArsTechnica had an article about it too. The short and sweet: if you were using Atmos Online directly, they aren't guaranteeing anything (including you being able to get your data back out). If you're an enterprise thinking about cloud storage as an alternative to maintaining expensive disk and/or tape in-house to hold all your archival data, this gives you something to think about.

Now, frankly, you should've been thinking about this anyway from the moment you started thinking about contracting with a vendor to store your data. Putting the magic word "cloud" in the name doesn't change the basic fact: you're putting your data in someone else's hands. When you do that you always, always account for things like "How do I get my data back from them?", "What happens if their facilities suffer damage?" and "What happens if they decide to shut down?". And you don't depend entirely on contract terms and penalties. Knowing that you can take your vendor to court and force them to pay up eventually, maybe, assuming they haven't declared bankruptcy, doesn't get you the archival data you need, and the IRS and the financial auditors and the rest won't really care whose fault it is that you can't get at data you're legally required to have available because it's your responsibility regardless.

There's also another question: how about security and privacy? Yes, against hackers attacking your supplier's network, but not just against them. What happens when your supplier gets served with a court order demanding they turn over your data to the other party in a lawsuit you're involved in? Some of that data might be e-mails between you and your legal department or outside attorneys, and reasonably subject to attorney-client privilege. But your attorneys won't get a chance to review anything before it's turned over, because you won't know it's been turned over until after the fact. How does your supplier handle this kind of situation? What steps are you taking to ensure that you can't be bypassed when it comes to getting at your data?

So when IT or management asks about cloud storage, make them answer those sorts of questions first. Or at least make them think about those sorts of questions.

Oh, and the service Phil wrote about? Notice that it uses standard NAS protocols to talk to its device, and standard formats for the stored data. That makes the question of "How do I get my data back?" a lot easier to answer.

Saturday, June 26, 2010

In re Bilski

Monday is the last day for Supreme Court rulings to issue for this term. So far, no opinion in In re Bilski, the major patent case this term, has come down. Some people are thinking that it'll have to come down Monday, because the Court won't want it to hold over into the next term. PatentlyO makes that argument. They also make the argument that it'd be better for the appellant here to drop the case before the ruling issues, and that the only reason for the appellant to pursue the case is that they want business-method patents to suffer a setback. I think Crouch is wrong: Bilski is appealing only because it's the only way to overcome the setbacks they've suffered thus far (see the documents on the case at Groklaw).

Crouch does make an interesting point, though, and one that gives hope that the Court will uphold the denial of the Bilski patent and, by extension, support the Patent Office's new position that purely abstract things like business methods aren't patentable: Monday is Justice Stevens' last day on the Court. He's also the only Justice who's short on delivered opinions; if he's writing the Bilski opinion, it'd bring him right into line with the other Justices. Stevens also has a track record of opposition to things like patents on abstract ideas and non-physical things, so if he's writing the opinion, it's likely because the opinion is in line with his track record and not favorable to Bilski. This'd be good news for software developers. These days one major problem in software development is patents that are over-broad and vague, with their holders trying to apply them to everything in sight. Or patents on blatantly obvious or long-existing things like a shopping cart (but in a Web browser!). Between Bilski and KSR v. Teleflex, the courts and the USPTO have given opponents of over-broad patentability a lot of ammo. That's also another point in favor of the Court upholding the appeals court in Bilski: it'd be in line with its thinking in KSR.

The alternative, of course, is that the Court decided to give Stevens a light load because he's retiring and the Bilski opinion will be held over for next term. But we can hope that's not the case.

Monday, June 21, 2010

Tablets, netbooks, laptops and PCs

Forrester Research is predicting iPad sales will tank. I'm not sure about that. In fact I think Forrester is dead wrong. Here are my predictions:

  • Tablets will displace netbooks as lightweight mobile platforms. On their own they're lighter and slimmer than netbooks and work well for media playing, Web browsing and the like. Attach a lightweight Bluetooth keyboard and they're OK for light text entry without needing too many accessories hauled along. And mobile devices like tablets will show high sales because they tend to be replaced relatively often (mostly because they're sold through cell-phone services with 2-year contracts and hardware upgrade offers just before the contract expires to tempt you into renewing).
  • Laptops will keep on being the portable computing solution. You won't take them to the coffee shop, but a single bag's easy enough to haul to a hotel or on a trip where you can set up on a desk. They'll show sales growth but not as much as mobile devices, because the wear and tear on the hardware's greater and you tend to have to replace them every 3-5 years because they're breaking down. And if not, you're seeing big increases in capability (bigger hard drives, larger displays, lower weights) that make a replacement attractive.
  • Traditional desktop PCs will continue to be a non-growth sector. Everybody who wants one already has one, and they only replace it when it stops working. CPUs and such are already fast enough for ordinary stuff, so there's no real push to upgrade the hardware. When it comes right down to it, though, desktops offer speed, display quality, keyboard quality, peripherals and security/safety that mobile devices lack. Desktops nailed down to a wired network aren't vulnerable to outsiders sniffing traffic. They can support bigger displays, because those displays are sitting on a nice solid desk and don't have to be carried around, and those bigger displays make for more comfortable reading of what's on them. But you don't buy a new desktop every 2 years, you replace them maybe every 5-8 years when they finally do start to break down.

Thursday, June 17, 2010


Yes, names. And the computer systems that handle them. If you write computer programs that handle people's names, read this blog post. Then read this article. Then go back and check your programs for how many of the assumptions in the article they make. Yes, all of those assumptions are invalid. Yes, you will have someone breaking them. Many someones. You'll have more people than you expect using your system. Think about this: right now if something occurs for one person in a million, you can expect more than 300 of them in the United States alone (307 as of July 2009).

And yes, someone out there undoubtedly has in fact legally changed their name to "Robert'); DROP TABLE users" just to be a prat. Your systems should be able to handle him in a suitably boring manner automatically, without needing special coding for SQL injection.
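
The defense is ordinary parameterized queries, sketched here with a throwaway sqlite3 table (the table and column names are mine): the database driver treats the name as data, never as SQL, so no special-casing is needed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

name = "Robert'); DROP TABLE users"

# The ? placeholder hands the value to the driver as data.
# Never build the SQL by string concatenation.
conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)   # the name is stored verbatim, and the table survives
```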

Monday, June 14, 2010

Inertial mass != gravitational mass

One principle of modern physics is that inertial and gravitational mass are equivalent: it doesn't matter whether you're standing on the surface of an object with enough mass to provide a gravitational pull of 1g or on a flat surface being accelerated at 32 ft/sec², the effects of both frames on you are the same.
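
As a quick unit sanity check, that 32 ft/sec² figure is just ordinary 1g expressed in imperial units:

```python
# 1 ft = 0.3048 m exactly, by definition.
g_imperial = 32.0          # ft/s^2, the figure quoted above
ft_to_m = 0.3048
g_si = g_imperial * ft_to_m
print(g_si)                # 9.7536 m/s^2, close to standard g = 9.80665 m/s^2
```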

Well, it turns out that isn't really so. The paper's rather technical, but it turns out at the quantum level you can get things that behave as if they had different masses depending on whether you're looking at gravitational or inertial forces acting on them. This should lead to some interesting physics in the next few years.

Wednesday, May 5, 2010

Title I, Title II

Well, the broadband Internet providers may have gotten their wish, and it'll be ashes in their mouths. A while back, Comcast won a legal victory when they got a ruling saying that the FCC didn't have any authority under Title I to regulate them (and specifically their throttling and shaping of bandwidth based on content and the destination the user was trying to get to).

Well, that's all well and good, except that there's this other part of the relevant law known as Title II. Title I applies to information services. Title II applies to telecommunications services. And the FCC has specific legal authority to decide which one Internet service providers fall under. So the FCC's going to simply shrug its shoulders and reclassify Internet service as a telecommunications service (as it was back before the Bush era's reclassification of it) falling under Title II. That gives them plenty of authority to impose all sorts of regulations the ISPs don't like, although the FCC's proposing putting in place some binding rules limiting the amount of regulation actually imposed.

Advice: don't taunt the bull if you don't want to get the horns.

.xxx domain argued for

ICM Registry, which has submitted the .xxx TLD for approval, is pushing for its approval and adoption. The argument is that parents can then filter out that domain to block inappropriate content.

Yes, well, I suppose that'll work just as well as RFC 3514 - The Security Flag in the IPv4 Header, better known as the Evil Bit.

Sunday, May 2, 2010

Floating-point math

Most programmers don't get it right. They forget that floating-point numbers aren't precise, and that numbers with a terminating representation in base 10 (like 0.1) may have a non-terminating, repeating representation in base 2. End result: math errors and weird results.
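
A quick Python illustration of the base-2 problem:

```python
import math
from decimal import Decimal

# 0.1 has no finite base-2 representation, so the stored value is a
# nearby binary fraction and tiny errors creep in.
print(0.1 + 0.2 == 0.3)        # False
print(0.1 + 0.2)               # 0.30000000000000004

# Compare with a tolerance instead of exact equality:
print(math.isclose(0.1 + 0.2, 0.3))   # True

# Or use decimal arithmetic when base-10 exactness matters (e.g. money):
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```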

So, on the web: a basic guide to floating point math.

And for the true math geeks: David Goldberg's paper on floating-point math, and the original ACM journal article if you have an ACM account with library access.

Thursday, April 29, 2010

Signs you're abusing SOAP

Big red flag that you either don't know how to use SOAP or that you shouldn't be using SOAP for this job:

The only argument your call takes is a string, which contains XML which holds the structured data the server needs.

You should either be building a SOAP object tree corresponding to the XML structure and passing your data as SOAP structured data, or you should drop SOAP entirely and use a simple HTTP POST.

Friday, April 2, 2010

Quake 2 in HTML5

OK, this just takes the cake. They've ported Quake 2 into HTML5 using a ton of JavaScript and such. Details on the developer's web site. I am amazed.

Tuesday, March 30, 2010

SCO v. Novell: stick a fork in them, they're done

It's official: the jury ruled in SCO v. Novell that Novell owns the copyrights that SCO was trying to claim. That pretty much puts paid to all of SCO's dreams of a litigation-lottery win in the IBM case too. IBM's not inclined to settle, and pretty much all that's left is IBM's counterclaims against SCO. SCO has a few scraps of claim left, but all the evidence they've presented put together doesn't amount to enough to make a porn starlet's bikini.

Here's the actual jury verdict form.

Friday, March 5, 2010

eBooks and book sales

There's been a long-running argument about whether making eBooks freely available without DRM or anything like that helps or hurts sales of physical copies of the book. The DRM proponents have usually argued that the people saying freely-available eBooks help sales are depending entirely on anecdotes that don't prove anything. Well, end of that argument. A pair of graduate students at the University of Michigan did a detailed study of 41 titles that were released as eBooks, looking at their sales figures pre- and post-eBook release. In most cases, an eBook release meant increased post-release sales of the books. The exception was Tor's release, which dramatically demonstrates the problems with restrictive eBook releases. Tor made the eBook versions available for only one week, required registration before you could download and generally made it annoying to get the eBook version. This group of books suffered a significant drop in sales, while the other 3 groups studied showed significant increases in sales.

Publishers take note.