Saturday, August 8, 2015

SSL/X.509 certificate infrastructure

After working with X.509 (SSL) certificates a bit, I think some changes to the infrastructure are needed. It's become obvious to me that the major CAs aren't entirely secure; there have been too many compromises of them. In addition, they make it too costly to use certificates effectively.

For client certificates, StartSSL has the right idea: charging based on the cost of the verification, not per certificate. They'll give you a free SSL client certificate for your e-mail address because it's all automated and they don't incur any significant costs above normal operation doing it. If you want a certificate where your actual identity is verified by checking a passport etc., you pay for the verification when it's done. After that you don't pay anything for getting certificates issued unless and until issuing one requires a new verification. This needs to be the norm for client certificates.

In addition, we need ways for an organization to issue certificates for its members. Your employer, for instance, is a much better source for a client certificate tied to your position as an employee, because they know for certain who their employees are while a CA doesn't. Ditto my bank: if they issue me a client certificate saying I have such-and-such an account with them, that certificate should be more acceptable to financial sites than a CA-issued one, because my bank knows for certain that I've got that account with them and they've followed the legally required checks on my identity before giving me that account.

We also need to move certificate generation and issuance into mail clients as well as browsers. That's not hard, since most mail clients these days use a browser component to display HTML e-mail and fetch remote content; that component can go through the same process a browser would. The critical thing is that private key generation is done on the client side, and that the private key never exists on the server at any point.
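
The client-side part can be shown with Ruby's standard openssl library. This is only a minimal sketch, not any particular CA's enrollment flow; the subject fields and file names are made up. The key pair is generated locally and only the signing request leaves the machine.

require 'openssl'

key = OpenSSL::PKey::RSA.new(2048)                  # generated and kept on the client

csr = OpenSSL::X509::Request.new
csr.version = 0
csr.subject = OpenSSL::X509::Name.parse('/CN=jdoe/emailAddress=jdoe@example.com')
csr.public_key = key.public_key
csr.sign(key, OpenSSL::Digest::SHA256.new)          # proves we hold the private key

File.write('client.key', key.to_pem)                # stays local, ideally encrypted
File.write('client.csr', csr.to_pem)                # only this is sent to the issuer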

Server certificates are where the real costs come in. Getting server certificates is prohibitively expensive, and that's one reason many places haven't gone to strong SSL. It's also difficult to determine from the certificate whether the server really belongs to the organization, because the CA issuing the certificates doesn't know which servers really belong to the organization. I'd much rather see CAs verify the identity of the organization and hand out a signing certificate that lets the organization issue its own server certificates, with that signing certificate restricted so only certificates claiming the servers belong to that organization are valid. Then companies could issue certificates without worrying about how much it was going to cost. It'd also make it easier to issue client certificates for employees and for system-to-system communication. Actually issuing certificates isn't hard: you can set up Web pages to do it automatically without much trouble, and the database to hold the needed information is fairly trivial if you only have to handle your own organization instead of every organization in the world the way CAs do.

Yes, most organizations won't employ the high-level security measures the CAs do to protect the signing keys. OTOH, each organization would be a much less lucrative target, because a compromise would only affect one organization's certificates. And because of the way most organizations operate, it'd actually be harder to get at the signing keys: most of the time they could sit on a USB drive in the sysadmin's desk drawer, and while physical access might be easy it requires the attacker to be physically at that location, and most of them won't even be on the same continent as their target.
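
As a sketch of what that restricted signing certificate could look like, here's the X.509 name-constraints approach using Ruby's openssl stdlib. This assumes ca_cert and ca_key already hold the verifying CA's credentials; the organization name, domain and lifetime are invented for illustration.

require 'openssl'

org_key  = OpenSSL::PKey::RSA.new(2048)
org_cert = OpenSSL::X509::Certificate.new
org_cert.version    = 2                             # X.509 v3
org_cert.serial     = 1
org_cert.subject    = OpenSSL::X509::Name.parse('/O=Example Corp/CN=Example Corp Signing CA')
org_cert.issuer     = ca_cert.subject               # ca_cert/ca_key: the CA's own credentials
org_cert.public_key = org_key.public_key
org_cert.not_before = Time.now
org_cert.not_after  = Time.now + 5 * 365 * 24 * 3600

ef = OpenSSL::X509::ExtensionFactory.new(ca_cert, org_cert)
org_cert.add_extension ef.create_extension('basicConstraints', 'CA:TRUE,pathlen:0', true)
org_cert.add_extension ef.create_extension('keyUsage', 'keyCertSign,cRLSign', true)
# Certificates signed with org_key are only valid for names under example.com:
org_cert.add_extension ef.create_extension('nameConstraints', 'permitted;DNS:.example.com', true)

org_cert.sign(ca_key, OpenSSL::Digest::SHA256.new)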

The above would also make it easier to validate Web sites from the user's end. When I set up access to my bank's Web site, for instance, my browser would ask me to verify the organization's issuing certificate. If I said it was OK, the browser would tie that certificate to a human-readable name for the organization and tie the site I'd visited to that certificate. When I visit a site again, if the organization certificate matches what I have on file the browser lets me in. If the site isn't tied to an organization certificate, it'd prompt me to check the one the site presented and tell it what identity I wanted it tied to. If it was another of my bank's sites (or servers), I'd just tell it "This site belongs to my bank." and it'd make the connection. If the site and the organization certificate didn't match (eg. I thought I was visiting my bank's site but it's not using my bank's organization certificate), I could tell it "No match." and it'd block access. And if I had a match on file but it didn't match what the server was presenting, I'd get a similar prompt with a stronger warning, so I could correct things if I really needed to (eg. the bank's changed things around and I need to update my records to match), but since it's more likely something malicious is going on I'd need to be really sure before accepting it.
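
A rough sketch of that decision logic, assuming the browser keeps a local store mapping hostnames to a pinned organization-certificate fingerprint and a user-supplied label; prompt_user and confirm_mismatch? are hypothetical UI hooks, not anything a real browser exposes.

require 'openssl'

def check_site(host, presented_org_cert, pins)
  fingerprint = OpenSSL::Digest::SHA256.hexdigest(presented_org_cert.to_der)
  pinned = pins[host]

  if pinned.nil?
    # First visit: ask the user which organization this site belongs to.
    label = prompt_user("New site #{host}. Which organization does it belong to?")
    pins[host] = { fingerprint: fingerprint, label: label } if label
    label ? :allow : :block
  elsif pinned[:fingerprint] == fingerprint
    :allow                                          # matches what's on file
  else
    # Mismatch with what's on file: strong warning, explicit confirmation required.
    confirm_mismatch?(host, pinned[:label]) ? :allow : :block
  end
end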

All of this would destroy the current business model for certificate authorities: their revenue comes from issuing individual certificates, and these changes would mean CAs issue only a single certificate per organization instead of dozens to thousands. But there are other opportunities. PGP introduced the idea of the "web of trust", where no single signature identifies a key as truly belonging to a given individual; you evaluate how many signatures it has, and how trustworthy each signature is, when deciding whether the key belongs to who it claims to. The same thing could be done with certificates. I go and get a client certificate from StartSSL. Then when my employer wants to issue me a client certificate, I can at the same time submit my own certificate and have my employer sign that as well. When I open a bank account, I can have my bank sign my client certificate in addition to issuing me a bank one, leaving my client certificate with three signatures on it. Now, suppose my employer's compromised and their signing keys are stolen. My company-issued certificate's invalid, obviously, but my own certificate's still good because StartSSL's and my bank's signatures are still trusted. Same with StartSSL: if they're compromised my own client certificate's still OK, because I've got two other signatures on it from uncompromised entities. As long as I have at least one uncompromised signature, I can avoid catastrophic failure. This applies to organization and server certificates too, and CAs could replace their revenue stream with one based on cross-certifying certificates so that a single compromise wouldn't be fatal to a large number of certificates.
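
The failure-containment rule amounts to something like the sketch below. The certificate/signature structure and verify_signature? are abstract stand-ins, since X.509 as it stands doesn't carry multiple independent signatures on one certificate.

def certificate_still_trusted?(cert, compromised_signers)
  # Valid as long as at least one signer is uncompromised and its signature checks out.
  cert.signatures.any? do |sig|
    !compromised_signers.include?(sig.signer_id) && verify_signature?(cert, sig)
  end
end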

Wednesday, February 4, 2015

Rails ActiveModel/ActiveRecord XML/JSON serialization replacement

I need a bit more control over the XML schema than the standard ActiveRecord/ActiveModel XML serializers provide, and a bit of control over root tags and attribute names in JSON that normally isn't there. I don't need the full control that something like RABL provides, and I'd like deserialization from XML to be automatically available for any XML I can serialize to. None of the existing gems provides what I want without hand-coding methods on all the classes involved, especially for the deserialization part. And if I've written the deserialization code and have the hashes to control it, the serialization code based on those same hashes won't be much additional work. The hard part will be eventually making it a drop-in replacement for the ActiveModel XML and JSON serializers and compatible with the parts of ActiveRecord that interact with serialization. I'll deal with that later, though; for now I'm just going to lay the groundwork for compatibility and deal with actually integrating it once I've got the code working outside that framework.



XML serialization/deserialization description and control data:

Item hash:
  • name: element or attribute name
  • source: object attribute name, literal data string, nil for no contents
  • +type: Ruby class name; Boolean is short for "TrueClass or FalseClass", Literal marks a literal element (no source attribute); defaults to String if this is not the root element, or to the class being deserialized for the root element
  • options: {options hash}, default {}
  • *attributes: [array of item hashes describing attributes of this element], default []
  • *children: [array of item hashes describing child elements of this element], default []
+ = needed only during deserialization
* = valid only in element item hashes, ignored in attribute item hashes

Options hash:
  • *default: default value if source's value is nil, default no value
  • trim: true/false, trim leading/trailing whitespace, default false
  • elide: true/false, omit if value (after trimming if relevant) is nil or empty, default false
* = not applicable to items of Literal type or items where source is nil.

Control attributes on the object:

xml_attributes: an item hash describing the root element of the object
  • Canonically the source would be nil, but a source attribute is legal if the root element will have actual text content.
  • The type attribute of the root element defaults to the type of the object if not specified, and canonically it isn't specified since forcing a mismatching type will cause problems.
  • The name attribute can be overridden by specifying an element name when serializing the object. When deserializing, the root element's name is ignored and it's assumed the caller is deserializing the correct class.
attributes: a hash describing the attributes to be serialized
  • The key is the source attribute name.
  • The value is nil or a string giving the element name to be used.
If xml_attributes is not present, attributes will be used instead. One or the other must be provided.
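
To make the scheme concrete, here's what the control data might look like for a hypothetical Invoice class. The class, attribute names and element names are invented, and details such as whether xml_attributes ends up as a method or a plain attribute, or whether type holds a class-name string, are still open at this point.

class Invoice
  def xml_attributes
    {
      name: 'invoice',                  # root element; source nil = no text content
      source: nil,
      attributes: [                     # become XML attributes on <invoice>
        { name: 'number', source: 'invoice_number', type: 'String', options: {} }
      ],
      children: [                       # become child elements of <invoice>
        { name: 'customer', source: 'customer_name', type: 'String',
          options: { trim: true, elide: true } },
        { name: 'total',    source: 'total_amount',  type: 'BigDecimal', options: {} },
        { name: 'paid',     source: 'paid',          type: 'Boolean',
          options: { default: false } }
      ]
    }
  end
end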

When serializing XML:
  • If attributes is used, the attributes are serialized as child elements of the root; the default for include_types is true, and use_source_names defaults to true if the value is nil and false if the value names an element name. If xml_attributes is used, the default for both include_types and use_source_names is false. This allows the XML to be unchanged when an old-style attributes hash is being used.
When serializing JSON:
  • If xml_attributes is used then the attributes sub-hash is serialized first, followed by the children sub-hash. Items with a nil source or Literal for type are ignored; they're relevant only to XML serialization.
  • When deserializing, if type is absent then the type is determined by Ruby's default rules and the format of the value. Contained objects will end up deserialized to a hash, and a class-specific attributes= method will be needed to recognize the attribute name, create an object of the correct class and initialize it with the hash before assigning it to the attribute.
to_xml options:
  • root_name: name of the root element if not the class name of the object
  • include_types: include type and nil attributes in XML
  • use_source_names: follow Ruby's naming conventions for element names
root_name is normally used when attributes controls serialization, or when the root element name in xml_attributes needs to be overridden. Setting both include_types and use_source_names to true will yield the same XML normally produced by the old serializer when using xml_attributes.

to_json options:
  • include_root_in_json: include the class name as the root element of the JSON representation, default false
  • include_all_roots_in_json: include the class name as the root element for all contained objects (implies include_root_in_json), default false
include_root_in_json only puts the root element in at the top level, not on contained objects. That makes it compatible with the JSON expected when the matching flag is set in the from_json call.
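
And a sketch of how the options above might be used, with the hypothetical Invoice from the earlier example; the exact method signatures are guesses at this stage, not the final API.

invoice = Invoice.new

invoice.to_xml                                    # root element comes from xml_attributes ('invoice')
invoice.to_xml(root_name: 'archived-invoice')     # override the root element name
invoice.to_xml(include_types: true,
               use_source_names: true)            # mimic the old serializer's output

invoice.to_json                                   # no root key, per the default
json = invoice.to_json(include_root_in_json: true)       # {"invoice": {...}} at the top level only
Invoice.new.from_json(json, include_root_in_json: true)  # expects the matching root key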

Monday, July 16, 2012

Merchant charges

My American Express card number got "stolen" and used for a fraudulent transaction this morning (caught by fraud detection and declined after I was contacted). Last month it was my Visa debit card. What annoys me is that this doesn't have to be possible. Right now it's possible because our system expects me to give the merchant my account number and have them initiate a charge from my account. But why does it have to work that way?

Suppose instead it worked thusly:
  • The merchant gives me a merchant account number, transaction code and amount.
  • If I'm making a purchase on-line, I go to my bank/issuer's Web site and enter an order to send a payment for the amount in question to the merchant's account, referencing the transaction code.
  • If I'm making a purchase in the store, I hit my bank/issuer's app on my cel phone and do the same thing.
  • If they don't have an app, I use the phone's browser to go to their mobile Web site and do the same.
  • If I don't have data/Web access from my phone, I call an automated phone line and do the same (phone number verified by the automated billing info on the call).
  • The bank/issuer sends the payment to the merchant.
  • The merchant verifies the payment was received, and gives me my merchandise.
Now it's all but impossible for a fraudster to use my card. Merchants don't need to know my card number, so there's nothing stored on their systems for anyone to steal. My bank/issuer login information is stored on my systems, which are under my control, so it's a lot easier to take steps to prevent compromise (and if it is compromised, it was under my control, so it's a lot easier from a legal standpoint to say I'm responsible for the problem; and I can change passwords in just one place, so it's easier to fix any compromise). It doesn't even require any new infrastructure: banks and credit-card companies already have the networks in place to do electronic funds transfer (it's how they handle the daily settlement with merchants and how they handle charging your card). So why do we accept fraud and the attendant problems when there's an alternative available?
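
As a toy model of the flow, with the bank and merchant objects standing in for whatever systems would actually implement this (all the names here are invented):

MerchantRequest = Struct.new(:merchant_account, :transaction_code, :amount)

# 1. The merchant hands me a payment request instead of taking my card number.
request = MerchantRequest.new('MERCH-12345', 'TXN-98765', 42.50)

# 2. I tell my own bank (web site, app or phone line) to send the payment.
bank.send_payment(to:        request.merchant_account,
                  amount:    request.amount,
                  reference: request.transaction_code)

# 3. The merchant checks its own account for that reference, then hands over the goods.
merchant.release_order if merchant.payment_received?(request.transaction_code)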

Of course, there's always the case where you don't have a phone or any other way of initiating a transaction. But we have physical cards, and identification. Standard swiped transactions can continue to work, although they'd be considered higher-risk. Just go back to where we were when I was starting out in the world: when you present a card, the first thing the merchant asks is "Photo ID please.". That'll cut down on card-present fraud; it's harder to fake two forms of ID, and the fraudster has to balance the cost of a good forged driver's license against the amount he can purchase without tripping red flags.

And we're reaching the point where even kids have cel phones with data plans. That adds another layer: someone who normally does bank-initiated payments suddenly doing a card-present swipe is abnormal activity and a big red flag saying "Potential fraud! Contact the cardholder to verify.". That's another hurdle for the fraudsters: they don't just have to fake the card and photo ID, they have to target a card that's regularly used for swiped transactions. Merchants don't have to store card information for swiped transactions, so the fraudsters are limited to skimmers or compromising merchant point-of-sale systems. In the process it also gives me, the account holder, the option of removing myself from any risk of compromise by getting a suitable cel phone and avoiding swiped transactions entirely. I can still leave myself open to fraud, but it's my choice, and I get to balance cost vs. risk instead of depending entirely on merchant security.

So why are we still open to card fraud?

Wednesday, October 19, 2011

Future plans for Silverglass

I've got two general plans. One is to move a chunk of the site over to Google Sites as a short-term measure. XMission's done a good job of hosting the web site and e-mail, but I want to do things I can't do on their service. I want to run my own e-mail server, for instance, so I can provide service to a few friends. I want an external shell server, and I want a Web server I can set up to host other domains if I want to. I need my own host under my control for all that. The problem is I can't set all of that up overnight, so I need a temporary place to hold the web site while I sort out the rest.

Long-term I want to set up CMS and website-editing software that'll let me do a lot more with the site without having to mess with editing HTML directly, and move the site completely onto my own server. Google Sites is nice for content, but I can't run custom back-end applications on it. I've got a few webapps I want to play with, and I need a server I can run them on. I may start by keeping the text portion on Google Sites and using my own server only for the webapps, but long term I want to be self-sufficient. The trick will be to find software that lets me create page templates and fill in just the content without limiting me to only doing that. Oh, and a model that lets me create/edit everything locally and publish to the site only when it's all correct is a must. Live editing of the site is a no-no.

Monday, August 22, 2011

PC vs. post-PC

To me the whole argument doesn't make sense. The world simply isn't going to dump what we consider the PC; it's just too useful. To me the argument boils down to one of form factor vs. usage:

Desktop. A standard large-case PC with a separate keyboard, mouse and monitor. Its advantage is, to quote Tim Allen, "MORE POWER!". It can run pretty much the most powerful CPUs around full-bore constantly without overheating. It doesn't worry about draining a battery. It can have a lot of screen real estate with multiple large monitors. It can have a full-sized keyboard and mouse. It's got lots of relatively fast local storage and continuous network access. It's nailed down to a single location and tethered to the network via a wire, which makes securing it easy. And while that's also its biggest downside, the spot it's at is where most of an office worker's work is anyway. It's most useful when you've got a lot of text to type, data to enter, or output to view. It's, quite bluntly, the full-sized pickup truck or van of the computer world.

Notebook. Not the ultra-light, the totable one you carry around in a bag. You plop it down on a table, plug it in to power and maybe a network jack, and you're good to go. It's got a smaller keyboard, mouse and display than a desktop, and not as much horsepower under the hood, but it's ideal when you need something you can plop down anywhere but not something you have to constantly carry around. Think a businessman at a hotel, or someone out at a customer site for the afternoon. It's literally a portable substitute for the desktop, not a mobile device.

Netbook. This is the ultra-light. It's small, low-powered and easy to carry. Ideal for when you need to do some stuff that involves a keyboard but need something you can carry around all the time. It's what you might take to the coffee shop when you expected to be doing some IMing or light typing, but didn't expect to be typing page after page. Typically has a wireless network interface, and connectivity is assumed to be intermittent.

Tablet. Something from about the size of a paperback book up to a clipboard. No keyboard, no mouse; it's based around a touchscreen interface. You aren't going to be doing a lot of typing of text on this, the interface just isn't suited to it. But you can browse the Web easily, and deal handily with applications that involve selecting things from what's displayed on the screen. Its network interface is wireless, and it can't assume a constant network connection. Great for a lot of floor work like taking inventory, and for times when you need to view the Web but not enter a lot of data.

Smartphone. Sizes up to about the size of a paperback book. It's mostly a scaled-down version of a tablet, the much smaller screen size being the big distinguishing feature. Where a tablet might be able to get away with the same sort of presentation you'd see on a notebook or desktop system, there's no way that works here. The other big difference is that, since it's expected to be connected to a cel phone network, it assumes a more constant network connection. It may take advantage of a wireless network if one's available, but if not it's still got a data connection through the cel network. Where a tablet's used to a greater degree for processing data and doing things, the smartphone form factor's used more for communication. Voice, video, IM: it's a pocket-sized device for talking to other people that, oh, by the way, can also display data. A tablet, OTOH, is a device that displays and lets you manipulate data and, oh, by the way, can probably also be used as a phone.

You can see how different the various categories are in terms of what people do with them. I think people don't want a replacement for the standard desktop PC, unless it's to replace it with a laptop that can accept attaching a larger monitor and a better keyboard and mouse for desktop use. And even then a lot of people would wonder why bother, since they don't want to take their desktop home with them. They either want something else instead of the desktop, because they don't do the things a desktop is good at and spend all their time doing things the netbook, tablet or smartphone form factors are good at, or they want those other form factors as an adjunct to the desktop with data shared between the various units. In large part what they want is the way Google services work: I can tie my Google address book and calendar into my e-mail program and when I update a contact there or through the Web interface the changes automatically show up on my phone, and vice-versa. If I change a document on the tablet, I want to see the changed version on my desktop system. I may want to make up my grocery list using a tablet as I check the cupboards and freezer for what we're out of, and then be able to call up that shopping list on the pocket-sized smartphone while I'm shopping and all I'm doing is scrolling through the list and marking things off as I get them (I could carry the tablet along, but it's a little big to fit in a pocket conveniently).

But in the end, I don't expect people will give up their big desktop systems for things like gaming or typing long documents or working with large spreadsheets or all the other stuff you do at the office. And I sure don't see PC gamers giving up their systems; there's just too much stuff you can do with a full keyboard and mouse that console games don't have the input devices to match. We'll be in a post-PC world in the same way IT is in a post-mainframe or post-COBOL world, or the way computers have led us into a post-paper world, or the way nuclear reactors and solar power have led us into a post-steam-boiler world. Oh, that's right: nuclear reactors and most large-scale solar farms just... boil water using a new heat source. Mainframes and COBOL are still firmly entrenched. And paper, well, "It's the 21st century. Where's my paperless office?". I don't expect "post-PC" to be any different. We'll add new devices and new capabilities, and the PC will adapt to work with them without ever disappearing (or even becoming less of a central player).

Friday, August 12, 2011

Google+ (and other social-media sites) privacy issues

A lot's been said about the privacy issues of Google+. I'd note that there's a flip side, too. Robert Heinlein pointed out that one of the best ways to lie is to tell the truth, but not all of it. Sites like Facebook and Google+ can be turned around and used to lay down the trail you want other people to find. It doesn't have to be a complete trail, just convincing. When someone goes looking, they'll find the trail you want them to find. And since they have found a trail, often they won't go looking for other trails. And if they do and you catch them at it, you have a good case for harassment against them. After all, they'll have to admit that they did find the data on you, and that it all pointed to completely uninteresting places and results, and exactly what evidence do they have that there's anything more? None.

It's a piece of advice for the Evil Overlord's Accountant: keep 4 sets of books. The first set contains records that are completely and utterly clean and prove that the Evil Overlord is a saint, completely above suspicion of even littering. The second set, which you reluctantly let investigators find if they aren't buying the first set, contains records that match up with the totals for the first set but have some transactions that, while they appear illegal at first glance, turn out upon further investigation to be merely shady and embarrassing but completely legal. Any investigators will probably have stirred up some trouble with their efforts to uncover this second set, and after getting all excited about their initial findings will likely have egg on their faces when it all turns out to be duds, and their superiors will be more than happy to just drop the investigation before they're embarrassed any further.

Apply this tactic with social networks. If you have things to hide, set things up so you're easy to find and lay down a nice innocuous trail using those profiles. Then quietly do anything you don't want people finding out about under alternate identities that don't have any connection to your public profile. After all, it's easy on even Google+ to set up a profile under a fictitious name, as long as the name itself doesn't draw attention and you're discreet about what information you fill in. Just remember that these sites record IP addresses, so use some form of proxy to avoid linking profiles by "they're accessed from the same computer".

Wednesday, July 27, 2011

Google+ real-name issue

I can see the arguments on both sides, and they aren't mutually exclusive.

People want to be anonymous. They want to eg. criticize corporations and governments without necessarily opening themselves up to having government agents showing up at their workplace, or corporate lawyers contacting their employer. More innocuously, they want to have a private side of their life that isn't mixed up with their work and professional life.

Other people want continuity of identity. They want to know that when a given name speaks it's the same person, and that a single person can't trivially have a multitude of identities and turn themselves into a virtual crowd that can drown everyone else out or create the illusion of popular support for a position that lacks such support.

Google wants things tied to a single identity, so they can link activities to individual users in a coherent manner. They want to know what your search history is so they can correctly determine what's popular and what's not.

This can in large part be done by disconnecting your profile from your account. Your account gets verified, making it possible but non-trivial to create multiple accounts. Only Google gets to see your account. You can then create one or more profiles linked to that account. The profile can have any name you want, any other information you want, and the information doesn't need to match from profile to profile. They all tie back to the account so Google can audit and track things, but that link's invisible to the public. If someone thinks you're creating sock-puppet profiles, they can file a complaint and Google can verify whether the profiles are all linked to the same account or not without needing to let the world see the link. People can be assured that a given profile belongs to the same person over time, and a name can build up a reputation without people needing to know the exact person behind the name. The profile's identified by a unique identifier like a profile ID, not by the name, so you can see if a different profile's trying to masquerade as an established name. Most everybody's requirements are met.

The problem comes with anonymity. That depends entirely on Google's willingness (and legal ability) to say to someone showing up demanding the account information behind a profile "Do you have a court order? No? Then come back when you have one.". And that in turn depends on a more basic legal/political issue: requiring someone to show that the person behind the identity has in fact done something actionable before being allowed to demand their identity so legal action can commence. To me that sounds reasonable, and in fact the law technically requires charges to be supported before they can even be filed, but in practice the courts seem reluctant to tell a plaintiff or prosecutor "You haven't supported your charges. Go away until you can.". That's a political problem, one that needs to be addressed by pressuring the politicians to state explicitly that plaintiffs and prosecutors aren't allowed to file charges or commence legal action without solid evidence to support the claims already being on the table. Until that happens there's no way to both allow anonymity and hold bad actors responsible for their actions, and you can't have a civilized environment without both.

Sunday, June 12, 2011

Microsoft dumping .Net for HTML5/Javascript

Mike James wrote a blog entry about this. My immediate thought: what did you expect? Microsoft sets their roadmap based on their business needs and what'll benefit them, not what'll benefit their developers. And Microsoft is focused on one thing: insuring that Microsoft controls everything. When they think "cross-platform", they mean "across all Microsoft platforms". They want it to be easy to have apps run on all their platforms, from desktops to phones, and to be as hard as possible to make those things run on any other platform. If things do run on other platforms, they want to insure that Microsoft is a required part of the infrastructure anyway.

C# and .Net and Silverlight are problematic for Microsoft because they can run on other platforms, and when they do other Microsoft products and platforms aren't required. It's not easy but the Mono project has demonstrated that you can create a C# compiler and .Net runtime for Linux, and once it exists for Linux it's a lot easier to port it to any other Unix variant. Silverlight will likely fall into the same category. But HTML5/JS has one big advantage: it's limited enough that you can't create an entire complex application in it. You can do the UI part, but you're going to need a back-end to handle things that just can't be done in Javascript. Microsoft's hope is to use the standardized nature of HTML5/JS to sell management by making sure the "standards-compliant" checkbox is checked, while doing some things under the hood and in the client/server communication with the back-end to insure that you'll need a Windows system in the mix to successfully run one of their new apps. Think back to ASP and how it got tied into IE-specific bits of Javascript and HTML to the point where it was hard to make ASP webapps work right anywhere but in IE. How successful Microsoft will be with this is up in the air, but I think their goal is what it's always been: to make sure it's Microsoft and Windows everywhere, for everything, to the exclusion of everything and everyone else.

As a developer, make no mistake: Microsoft does not care about you. You are important only insofar as you buying into their roadmap is needed for it to succeed. Once that's no longer needed you're at best unnecessary and at worst competition to be eliminated (every sale of software you write is the loss of a sale of something Microsoft wrote or could write, and you selling your software at the expense of Microsoft doesn't benefit Microsoft).

Thursday, April 28, 2011

Sony PSN compromise

We've all seen the news about Sony's PlayStation Network being compromised. It's bad enough the bad guys got personal information. It's worse that they got credit-card numbers. But it's downright unbelievable that they got passwords!

OK, first off: users are partly to blame. You shouldn't be sharing passwords between services. Whatever password you used for PSN shouldn't be used elsewhere.

But that doesn't excuse Sony. The passwords should never have been stored on the servers. Unix has handled that for years: it doesn't store your password, it stores a one-way cryptographic hash of your password. Remember that it doesn't need to know your password, it only needs to confirm that you know your password. So instead of storing your password it runs it through a cryptographic hash algorithm and stores the result. When you enter your password, it runs what you entered through the same algorithm and compares the result to the stored value. If you entered the right password, the two will match. If they don't, you didn't enter the right password. If you choose a strong algorithm it won't be feasible to take the stored hash value and reverse the process to get the original password, and there's no reason not to choose a strong (SHA-1 or better) algorithm since there are plenty of easy-to-use cryptography libraries out there (many of the best don't even cost money).
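
In Ruby that looks roughly like the sketch below. One detail beyond what's described above: any real implementation should also use a per-user salt (and preferably a deliberately slow hash like bcrypt or PBKDF2) so identical passwords don't produce identical stored hashes.

require 'digest'
require 'securerandom'

def store_password(password)
  salt = SecureRandom.hex(16)                       # per-user salt, stored alongside the hash
  { salt: salt, hash: Digest::SHA256.hexdigest(salt + password) }
end

def password_valid?(entered, stored)
  Digest::SHA256.hexdigest(stored[:salt] + entered) == stored[:hash]
end

record = store_password('correct horse battery staple')
password_valid?('correct horse battery staple', record)   # => true
password_valid?('wrong guess', record)                     # => false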

And credit-card numbers? In this day and age we should be able to do better verification of credit cards. Check ID for in-store purchases, for instance. But most fraud is on-line, you say? We can still do better. Requiring the CVV2 for all non-recurring purchases, for instance. Or linking your cel phone number to your credit card and using text messages to confirm the purchase. When a non-CVV2 charge is attempted, you get a text from the card issuer with details. You then have to text a charge ID code plus an authorization code (CVV2 or other set value) back to confirm the charge. No confirmation = charge declined. Now to make a fraudulent charge the bad guys not only need to get your card number, they need to clone your cel phone, which means they need to know your cel phone number, SIM serial number and IMEI, and they have to set up actual hardware. These guys operate wholesale; adding the time to do that work makes an 80-90% dent in the number of transactions they can run, which pretty much hoses their business model.

Or better yet, for recurring payments go to a "push" or customer-originated payment system. So for PSN, instead of giving them your credit-card number and letting them initiate charges, PSN gives you a merchant account ID and transaction code. You go to your bank (or more likely to their Web site) and set up a payment to that merchant account for the amount required, using the transaction code as a reference number. Your bank then sends the money to PSN's account. End of most existing types of credit-card fraud, because merchants don't need to know any payment information anymore. The only thing you'd need a normal merchant-initiated charge for is over-the-phone purchases, and even then if you've got a cel phone the verification process above's possible.

So why are we still doing things the old-fashioned, fraud-prone way?

Monday, April 4, 2011

ASP.Net ViewState

Really, really dumb idea: using a complex, high-overhead ViewState mechanism to set field values in the Web page returned from posting a form, instead of simply setting the field values in the returned page. I mean, the posted form has the values populated; that's its whole purpose. So when generating the response page, simply set the value attributes to what the fields' values were in the POST request. Simple, no? Apparently not for Microsoft.
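
The simple alternative looks about the same in any server-side framework; here's a sketch in Ruby/ERB with made-up field names. The response form just echoes the posted values back, properly escaped, with no hidden state blob.

require 'erb'
require 'cgi'

params = { 'username' => 'jdoe', 'email' => 'jdoe@example.com' }   # values from the POST

form = ERB.new(<<~HTML)
  <form method="post">
    <input name="username" value="<%= CGI.escapeHTML(params['username'].to_s) %>">
    <input name="email"    value="<%= CGI.escapeHTML(params['email'].to_s) %>">
  </form>
HTML

puts form.result(binding)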

Friday, February 11, 2011

California stores can't store your ZIP code

A ruling came down from the California Supreme Court that's eminently sensible: your ZIP code constitutes personally identifiable information that can be used in conjunction with your name to determine where you live and exactly who you are, and California merchants aren't allowed to keep it on file.

Now, the reporting on CNN is a bit hysterical. It says retailers can't ask for your ZIP code. The ruling, OTOH, says explicitly that retailers can ask for it and use it in conjunction with authorizing your credit card, and notes that this is what the law explicitly says and not their interpretation. It's the recording of the ZIP code for uses other than authorizing a credit-card transaction that the law and this ruling prohibit. This ruling doesn't do a thing to compromise transaction security or identity verification. All it does is remind retailers (and the lower courts) that yes, the law really does prohibit a retailer from building a database of consumers and their buying habits without the explicit consent of the consumer. I know retailers don't like that, but them's the breaks. Consumers don't like retailers doing it, and there's no particular reason businesses should always get their way regardless of how their customers feel. Businesses always say that if consumers don't like their practices they have the option of not patronizing those businesses. Well, if businesses don't like California's practices they always have the option of not doing business in California, no? Sauce for the goose is sauce for the gander.

Monday, January 3, 2011

PS3 root signing key revealed

Apparently GeoHot has found and published the root key used by the PlayStation 3 to sign and verify games. Not just the public key used for verification, mind you, which is the easy part: he's published the private key used to sign the game executables. With the private key, you can sign your own executables and they'll be accepted by the PS3 without needing any hacks, kludges or bug exploits. This is a big deal because the key's pretty firmly embedded in the hardware itself. A simple firmware update can't change it. And if new hardware doesn't accept the old key, then all existing games simply won't play on the new hardware.

Frankly I don't see why the console makers are so bent on keeping anything but their approved software from running on the consoles. It makes no sense. Someone who wants to run, say, Linux on the PS3 still has to buy the full-blown PS3 console, there's no way around paying Sony for that. They may not buy any games if all they're interested in is running Linux, but then if they couldn't run Linux they wouldn't be buying those games either, nor would they be buying the PS3. They might use this ability to cheat at single-player games, but what's that hurt? They're still buying the PS3 and still buying the games, and there's nobody else to be affected by their cheating. Multi-player games... hacked games might be an issue, except that those games already have lots of ways of detecting cheating on the server side where it's safe from user intervention. For instance, to prevent target hacks simply don't send the client information about objects it can't see. Not even the best hack can target what doesn't exist.

Sunday, December 19, 2010

Browser behavior

We've already got controls in the browser to do things like reject attempts to set third-party (not belonging to the domain of the Web site you're visiting) cookies. I think we need to extend this idea further and make more behaviors dependent on the first-party vs. third-party distinction (a rough sketch of the decision logic follows the list):

  • Allow sending of cookies to the site you're visiting while not sending cookies in requests for third-party content. Eg. Facebook's cookies get sent when you're visiting Facebook itself, but if you're visiting CNN's Web site then Facebook cookies are not sent when fetching the Facebook Like button on their page.
  • Allow Javascript to run when it's fetched from the same domain as the site you're visiting, but not allow it to run when it's being fetched from a different domain.
  • Allow image blocking to block only third-party images.
  • In all cases where there's a block-list in the browser, allow for three settings: allow always, allow first-party only (block third-party), and block always.
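
Here's the sketch of the decision logic; the naive last-two-labels check stands in for the public-suffix handling a real browser would need, and the setting names are invented.

def third_party?(page_host, request_host)
  page_domain    = page_host.split('.').last(2).join('.')
  request_domain = request_host.split('.').last(2).join('.')
  page_domain != request_domain
end

def allow?(setting, page_host, request_host)
  case setting
  when :always     then true
  when :first_only then !third_party?(page_host, request_host)
  when :never      then false
  end
end

allow?(:first_only, 'www.cnn.com', 'www.facebook.com')   # => false: no cookies sent, no JS run
allow?(:first_only, 'www.cnn.com', 'media.cnn.com')      # => true: same first party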

Tuesday, November 30, 2010

IPv4 address pool draining fast

We're down to just seven unallocated /8 netblocks. Those are the blocks that get assigned to the Regional Internet Registries (RIRs), who in turn hand out blocks of addresses to entities needing to connect to the Internet. That means we've effectively got 2 blocks left, since when the pool hits 5 unallocated blocks each of the 5 RIRs will automatically get one of them. That'll exhaust the pool of addresses ICANN can allocate to RIRs.

That won't mean too much immediately. The RIRs have unassigned space they can keep handing out. But they won't be able to go to ICANN to get more blocks. That means that when they assign their last space, that's it. Finished. No more. You want on the Internet? Sorry, no addresses left the RIR can give you. It won't be a big cliff, but gradually there'll be more and more problems. Hosting centers won't be able to add more machines because they don't have addresses to give them and can't get any. Consumer ISPs will have problems signing up subscribers because all the addresses available in that area are in use and the ISP can't get more address space. I figure it'll take about 6 months to a year to really come to a head.

Me, I'm going to finish prepping my LAN and gateway for full IPv6 capability and setting things up to run IPv6 internally in parallel with IPv4. That way I'll be ready for the inevitable switch to IPv6 by Cox. And I'm going to make sure any routers I buy will handle IPv6.

And I really ought to work out how to load custom firmware into Netgear routers and access points. I've things I want to do with them.

Sunday, November 14, 2010

Another reason to avoid the Windows Phone 7

Apparently the OS on Windows Phone 7 permanently modifies SD cards. Now, bear in mind that the card slot involved isn't an externally-accessible one, it's under the battery like the socket for the SIM card and you can't readily swap SD cards in and out of it. I suspect it's meant to offer carriers expanded storage for stuff that the user can't mess with or replace/upgrade themselves (if the carrier does things right). But it does bring up one point: Windows Phone 7 devices won't have an SD card slot users can swap cards in. No more dumping your files onto a card and reading them into another phone. No taking the card with phone files over to the computer and reading them in. No external backup of data. No external swappable storage period. To me that's a good reason to avoid those phones. I'd stick with an Android or other smartphone, where I've got the option of external storage.

Thursday, October 7, 2010

URL shortener problem

If you use Twitter, you're probably familiar with the bit.ly URL shortener service. Even if you don't, you've probably run across TinyURL, vb.ly or other URL shorteners. They seem convenient. No more having to type or remember long URLs, just create a short one. No problem.

Until vb.ly went off the air. The domain was seized by the Libyan registrar that controls the .ly hierarchy, because content at the locations pointed to by vb.ly violated Libyan morality laws.

This is why URL shorteners are a bad idea. They create URLs that are under the control of a third party and which can be disrupted at any time. Since there's no direct mention of where the shortened URL points, once disruption happens it's impossible to locate the original destination. If you use the actual full URL, disruption can only occur if the actual site referred to is taken off-line.

Note also that this is why you should make your own copy of content if you really care about having it available. If you merely link to it, it's vulnerable to the destination taking it down or just changing what it says. Only when you control the copy can you insure that it doesn't change or become unavailable in the future. This may annoy copyright holders; however, I feel that if I'm writing commentary on what someone said, then making a copy to prove they did in fact say what I claim they said falls under fair use, and making a complete copy is necessary to show that I'm not merely cherry-picking bits out of context to misrepresent what was actually said, so it also falls under fair use.

Saturday, July 17, 2010

DNS root zone is now signed

The DNS root zone is now signed via DNSSEC. The idea behind DNSSEC is that the owner of a zone (roughly, a domain) generates a key pair and their DNS servers digitally sign the records they serve up. Intermediate DNS servers preserve those signatures, allowing querying machines to determine whether the records have been altered from what the authoritative nameserver sent. This makes it a lot harder to do a man-in-the-middle attack against DNS by hijacking a caching nameserver (say one belonging to an ISP) in order to re-route traffic to an attacker's servers. Not impossible, but it's a lot more involved. That's because the public key needed to verify a signature is returned from the zone above the signed zone and is signed by that zone; eg. the public key for silverglass.org's records is returned from the .org zone's servers, and the key for the .org zone is returned by the root nameservers. So for an attacker to forge silverglass.org records, he has to subvert the entire chain back to the root. Each verifying machine has the single key for the root zone pre-loaded (and presumably verified out-of-band to make sure it's valid), so it's infeasible to fake signatures on records for the TLDs (eg. .com, .org, .us). If I can control the records returned for .org queries I can substitute my key for silverglass.org's, allowing me to forge signatures on silverglass.org records. But since I can't substitute my key for the root key I can't fake signatures on the .org records containing the silverglass.org key, and any verifying server will detect my forgery.
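
The chain walk, following the simplified description above, looks roughly like this. The helpers are abstractions rather than a real DNS library, and actual DNSSEC splits this across DNSKEY and DS records.

ROOT_KEY = load_preconfigured_root_key    # shipped with the resolver, verified out-of-band

def validated_key_for(zone)
  return ROOT_KEY if zone == '.'
  parent     = parent_zone(zone)                     # 'silverglass.org' -> 'org' -> '.'
  parent_key = validated_key_for(parent)             # recurse up toward the root
  key_record = fetch_key_record(zone, from: parent)  # child's key, vouched for by the parent zone
  raise 'forged key in the chain' unless verify(key_record, parent_key)
  key_record.key
end

def validate_records(zone, records)
  verify(records, validated_key_for(zone)) or raise 'records altered in transit'
end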

That's great for security, but it poses a problem for some (IMO unethical) ISPs and DNS providers like Network Solutions. That's because they've been playing a game: when someone asks for a domain that doesn't exist, instead of returning NXDOMAIN (non-existent domain) for the query they return a valid result pointing the name at their own servers, which serve up advertising, search results and the like. Essentially they take ownership of every single invalid domain and slap their advertisements on it. But as soon as downstream DNS servers (eg. the ones in every home router) start verifying DNSSEC signatures, the gravy train ends, because those ISPs and DNS providers have no way of forging valid signatures. The only exception is that the registry operator can forge results for completely unowned domains within its scope, and the most common DNS software around has a flag to stop that (TLD servers are expected to only delegate to 2LD servers and should never return actual results, so any results they try to return must be faked and should be treated as NXDOMAIN).

Monday, July 12, 2010

Cloud storage-as-a-service

Triggered by an article by Phil Jaenke.

You probably saw the announcement about EMC's Atmos Online shutting down. ArsTechnica had an article about it too. The short and sweet: if you were using Atmos Online directly, they aren't guaranteeing anything (including you being able to get your data back out). If you're an enterprise thinking about cloud storage as an alternative to maintaining expensive disk and/or tape in-house to hold all your archival data, this gives you something to think about.

Now, frankly, you should've been thinking about this anyway from the moment you started thinking about contracting with a vendor to store your data. Putting the magic word "cloud" in the name doesn't change the basic fact: you're putting your data in someone else's hands. When you do that you always, always account for things like "How do I get my data back from them?", "What happens if their facilities suffer damage?" and "What happens if they decide to shut down?". And you don't depend entirely on contract terms and penalties. Knowing that you can take your vendor to court and force them to pay up eventually, maybe, assuming they haven't declared bankruptcy, doesn't get you the archival data you need, and the IRS and the financial auditors and the rest won't really care whose fault it is that you can't get at data you're legally required to have available because it's your responsibility regardless.

There's also another question: how about security and privacy? Yes, against hackers attacking your supplier's network, but not just against them. What happens when your supplier gets served with a court order demanding they turn over your data to the other party in a lawsuit you're involved in? Some of that data might be e-mails between you and your legal department or outside attorneys, and reasonably subject to attorney-client privilege. But your attorneys won't get a chance to review anything before it's turned over, because you won't know it's been turned over until after the fact. How does your supplier handle this kind of situation? What steps are you taking to insure that you can't be bypassed when it comes to getting at your data?

So when IT or management asks about cloud storage, make them answer those sorts of questions first. Or at least make them think about those sorts of questions.

Oh, and the service Phil wrote about? Notice that it uses standard NAS protocols to talk to its device, and standard formats for the stored data. That makes the question of "How do I get my data back?" a lot easier to answer.

Saturday, June 26, 2010

In re Bilski

Monday is the last day for Supreme Court rulings to issue for this term. So far, no opinion in In re Bilski, the major patent case this term, has come down. Some people are thinking that it'll have to come down Monday, because the Court won't want it to hold over into the next term. PatentlyO makes that argument. They also argue that it'd be better for the appellant here to drop the case before the ruling issues, and that the only reason for the appellant to pursue the case is that they want business-method patents to suffer a setback. I think Crouch is wrong; Bilski is appealing only because it's the only way to overcome the setbacks they've suffered thus far (see the documents on the case at Groklaw).

Crouch does make an interesting point, though, and one that gives hope that the Court will uphold the denial of the Bilski patent and, by extension, support the Patent Office's new position that purely abstract things like business methods aren't patentable: Monday is Justice Stevens' last day on the Court. He's also the only Justice who's short on delivered opinions; if he's writing the Bilski opinion, it'd bring him right into line with the other Justices. Stevens also has a track record of opposition to things like patents on abstract ideas and non-physical things, so if he's writing the opinion it's likely because the opinion is in line with that track record and not favorable to Bilski. That'd be good news for software developers. These days one major problem in software development is patents that are over-broad and vague, with their holders trying to apply them to everything in sight. Or patents on blatantly obvious or long-existing things like a shopping cart (but in a Web browser!). Between Bilski and KSR v. Teleflex, the courts and the USPTO have given opponents of over-broad patentability a lot of ammo. That's another point in favor of the Court upholding the appeals court in Bilski: it'd be in line with its thinking in KSR.

The alternative, of course, is that the Court decided to give Stevens a light load because he's retiring and the Bilski opinion will be held over for next term. But we can hope that's not the case.

Monday, June 21, 2010

Tablets, netbooks, laptops and PCs

Forrester Research is predicting iPad sales will tank. I'm not sure about that. In fact I think Forrester is dead wrong. Here are my predictions:

  • Tablets will displace netbooks as lightweight mobile platforms. On their own they're lighter and slimmer than netbooks and work well for media playing, Web browsing and the like. Attach a lightweight Bluetooth keyboard and they're OK for light text entry without needing too many accessories hauled along. And mobile devices like tablets will show high sales because they tend to be replaced relatively often (mostly because they're sold through cel-phone services with 2-year contracts and hardware upgrade offers just before the contract expires to tempt you into renewing).
  • Laptops will keep on being the portable computing solution. You won't take them to the coffee shop, but a single bag's easy enough to haul to a hotel or on a trip where you can set up on a desk. They'll show sales growth, but not as much as mobile devices, because the wear and tear on the hardware's greater and you tend to have to replace them every 3-5 years because they're breaking down. And if not, you're seeing big increases in capability (bigger hard drives, larger displays, lower weights) that make a replacement attractive.
  • Traditional desktop PCs will continue to be a non-growth sector. Everybody who wants one already has one, and they only replace it when it stops working. CPUs and such are already fast enough for ordinary stuff, so there's no real push to upgrade the hardware. When it comes right down to it, though, desktops offer speed, display quality, keyboard quality, peripherals and security/safety that mobile devices lack. Desktops nailed down to a wired network aren't vulnerable to outsiders sniffing traffic. They can support bigger displays, because those displays are sitting on a nice solid desk and don't have to be carried around, and those bigger displays make for more comfortable reading of what's on them. But you don't buy a new desktop every 2 years, you replace them maybe every 5-8 years when they finally do start to break down.