Tuesday, December 16, 2008

My network layout

For those interested, this is how I lay out my network.

The center of the network is the gateway/router box. It's got three network interfaces on it. eth0 connects to the wired LAN, network 192.168.171.0/24. Off the gateway is an 8-port switch that handles the computer room, with a run out to the living room and a 5-port switch there for the game console and my laptop when I'm out on the couch or have run a cable outside to sit on the walkway. It's all 100-megabit Ethernet; I'm planning on upgrading to gigabit at some point. Outgoing connections/flows from this network are relatively unrestricted. The only block is for DNS, which is only permitted to the gateway box.

eth1 on the gateway connects to a 5-port switch where the wireless access points are attached, on network 192.168.217.0/24. This network is considered untrusted: forwarding from it to other networks isn't permitted at the router, so machines on it can only talk to each other or to the router. In addition, the router blocks incoming traffic on that interface except for DHCP and IPSec, which limits what a rogue machine on that network can do. The access points don't have WEP/WPA enabled, but they do do MAC filtering. When I upgrade to a faster AP I may enable WPA with authentication just to annoy the kiddies. The primary use for this network is to carry the IPSec VPN traffic on the 192.168.33.0/24 network. That VPN network is considered trusted, and has the same outbound restrictions as the wired LAN segment. Forwarding between the wired LAN and VPN segments is unrestricted and un-NATed.

eth2 on the gateway is connected to the cable modem and gets its address via DHCP from Cox. Traffic from the wired LAN and VPN going out this interface is NATed. Incoming new connections are generally blocked, with specific holes opened for connections to services on the gateway machine (ssh, Apache on a high port, the DHCP client). The outside world is considered untrusted. Outgoing traffic has a few restrictions to prevent un-NATed addresses from escaping.
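
In iptables terms, the skeleton of the policy looks roughly like this. It's a simplified sketch, not my literal ruleset: the interfaces and networks are the ones described above, but the exact rules, the IPSec handling and the missing housekeeping are illustrative only.

  # Sketch of the gateway policy. eth0 = wired LAN (192.168.171.0/24),
  # eth1 = wireless (192.168.217.0/24), eth2 = cable modem,
  # 192.168.33.0/24 = the IPSec VPN network carried over the wireless segment.
  iptables -P FORWARD DROP

  # Wired LAN <-> VPN: forwarded freely, no NAT.
  iptables -A FORWARD -s 192.168.171.0/24 -d 192.168.33.0/24 -j ACCEPT
  iptables -A FORWARD -s 192.168.33.0/24 -d 192.168.171.0/24 -j ACCEPT

  # Wireless segment: never forwarded; the router itself only accepts DHCP and IPSec from it.
  iptables -A INPUT -i eth1 -p udp --dport 67 -j ACCEPT     # DHCP
  iptables -A INPUT -i eth1 -p udp --dport 500 -j ACCEPT    # IKE
  iptables -A INPUT -i eth1 -p esp -j ACCEPT                # IPSec ESP
  iptables -A INPUT -i eth1 -j DROP

  # Wired LAN and VPN get out to the world, NATed; replies come back in.
  iptables -A FORWARD -s 192.168.171.0/24 -o eth2 -j ACCEPT
  iptables -A FORWARD -s 192.168.33.0/24 -o eth2 -j ACCEPT
  iptables -A FORWARD -i eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -t nat -A POSTROUTING -o eth2 -s 192.168.171.0/24 -j MASQUERADE
  iptables -t nat -A POSTROUTING -o eth2 -s 192.168.33.0/24 -j MASQUERADE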

Most of the security is based on limiting physical access to the hard-wired network. The wireless network, which can be reached from outside the apartment, is treated the same as any network I don't control: laptops on it should be firewalled as if they were operating on a hostile network, and should use IPSec to make a VPN connection to the rest of my network. This keeps me from having to rely for security on hardware I don't completely control.

Tuesday, December 9, 2008

DNSChanger malware

The Washington Post's security blog is reporting on the DNSChanger malware. This stuff isn't new. It does two things: it changes your computer's DNS resolver settings so it uses DNS servers belonging to the bad guys instead of the servers your ISP provides, and it activates a DHCP server so that other computers (say, ones attaching to your wireless network) will take addresses and settings (including DNS server settings) from the malware instead of from the legitimate server. The result is that everything you do, and everything those machines do, gets funneled through the bad guys' systems along the way. It's hard for security software to detect this: there are no mismatches and no spoofing to spot as a clue that something's wrong.

On my network, this won't work. The DHCP server part might, within limits. But my wireless access point is on a separate physical network from the hard-wired machines, and the firewall blocks the DHCP protocol between the networks. With the VPN active, that limits the damage the rogue DHCP server can do. And my firewall also blocks the DNS protocol at the edge of my network. While you're on my network, you simply can't use any DNS server except the one I provide. If you try, your query packets will get rejected by the firewall when they try to leave my network. That means if you do get infected by DNSChanger, your DNS will simply stop working completely on my network until the problem's fixed. And my gateway machine, the one machine on my network that gets to do DNS queries to the outside world, doesn't run Windows, isn't used for Web browsing and isn't very susceptible to infection by malware.
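
For the record, the blocking rules boil down to something like this sketch (illustrative, not my literal ruleset; eth2 is the gateway's interface facing the cable modem):

  # No machine on the inside networks gets to talk DNS to the outside world;
  # only the gateway itself makes external DNS queries.
  iptables -A FORWARD -o eth2 -p udp --dport 53 -j REJECT
  iptables -A FORWARD -o eth2 -p tcp --dport 53 -j REJECT

  # DHCP never crosses between segments, so a rogue DHCP server on the
  # wireless side can't hand out addresses (or DNS settings) to the wired machines.
  iptables -A FORWARD -p udp --dport 67:68 -j DROP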

Wednesday, December 3, 2008

BodyParts webapp

I finished the first pass at the BodyParts webapp that lets me view and maintain the Legends&Lore body-parts inventory for my guild in EQ2. The security framework is enabled (anonymous viewing, a login for editing), Tomcat is set up properly, and Apache is configured to front for it. Now I can update the inventory from the in-game browser; no more writing things down on paper and windowing out to update text files.

Next things on the list:
  • Add add/subtract fields and buttons to the editing screen so that instead of having to do the math in my head I can just punch in the number of parts and hit Add or Subtract depending on whether I'm depositing or withdrawing parts.
  • Move the inventory from an XML data file into a real database table. I'll want to move the user information into a database table in the process and change authentication to match.
  • Reconfigure things to have all webapps in their own sub-tree so they don't sit directly at the root of the content tree. I'll have to see whether I want Apache to re-write URLs (so the user sees /webapp/BodyParts/ while Tomcat sees just /BodyParts/) or whether I want to make the change within Tomcat.
  • Change things in Eclipse to not depend on MyEclipse for the Spring framework and supporting libraries. I'm finding that as I get more familiar with things it's easier to edit the XML directly than to depend on MyEclipse's tools.
It's been a useful exercise so far.

Monday, December 1, 2008

MS claims in Vista lawsuit

Microsoft is currently embroiled in a lawsuit over Windows Vista. The plaintiffs are claiming that Microsoft falsely advertised machines as Vista Capable when they weren't capable of running the Vista shown in the advertisements. One of Microsoft's responses is that Vista Basic, which the machines are capable of running, is a real version of Vista and therefore there wasn't any false advertising. That isn't going to fly. The question isn't whether Vista Basic is a real version of Vista, it's whether it's the Vista advertised to everyone. And it isn't. It's missing almost all the elements shown in the advertisements, and nowhere in the ads does it say that what's shown may not be present. In fact, the ads emphasize those very elements as the things that make Vista Vista. That's going to kill Microsoft.

This isn't a software trademark or copyright case. This is straight-up consumer law that's been around almost as long as car salesmen have. If you advertise a car on TV and show the supercharged V8, 6-speed manual transmission, sports suspension, full leather interior, full power everything with sunroof version, and that's all you show, and you end the ad with "Starting from $9000.", then courts have held that you need to deliver the car as advertised starting at $9000. If the only model you offer at $9000 is the 4-cylinder, 4-speed automatic, cheap cloth interior, manual-everything stripped-down model, the courts will nail you for false advertising. You did the advertisement knowing you couldn't and wouldn't deliver what was shown for the price you quoted, and you don't get to do that. Which is why every car advertisement, when it shows the price, always says in readable text underneath the "Starting at" figure something like "Base price. As shown, $LARGER_NUMBER.". And Microsoft didn't do the equivalent. They showed Vista with the Aero interface, emphasized the Aero interface as a defining characteristic of Vista, and never gave a hint in the ads that Vista might not actually contain the Aero interface. A reasonable person, looking at the ads and the "Vista Capable" sticker, would conclude that the sticker meant the machine was capable of running what Microsoft was advertising as Vista. And it can't. And all those e-mails from Microsoft execs show that Microsoft knew it. Bam. Stick a fork in 'em, they're done. They can wriggle all they want, but when it comes down to it they're going to lose on that point for the same reason car dealers lost and had to start adding that explanatory text.

Friday, November 28, 2008

The Srizbi botnet

After being taken down when McColo was shut down, then resurrecting itself, the Srizbi botnet has been taken down once again. It won't last, though: the botnet will start searching domain names trying to reestablish contact with its command-and-control network, and its operators will probably succeed. We need new tactics:
  1. Instead of trying to block the botnet when it searches for a new C&C network, intercept it. Put something up at the next few domains it'll try that will respond correctly and take control of the botnet. Once you've got control, use that control to have the botnet download a new module that'll remove the botnet software from the computer, then shut down the computer until the user reinstalls Windows.
  2. Start sanctioning users who allow their computers to become infected. Right now there's no significant penalty assessed against the people who continue to allow their computers to be used by the botnet month after month after month. Start penalizing them. If they get infected, their Internet access gets suspended until they can provide evidence to their ISP that they've cleaned their machine up. Second offense, they get a 3-month suspension. Third offense, they get permanently disconnected. It's the Information Superhighway, and just like the real road system if you accumulate too many points we take away your driver's license.
Keeping a computer secure takes some effort and thought, especially if you're running Windows, which was designed to be vulnerable. If there aren't noticeable penalties for being insecure, users just won't put forth that effort.

Friday, November 21, 2008

Spam volume

Early last week McColo was shut down. They provided hosting to nearly half the spam-related community. Some people were saying it wouldn't make a difference, that the spammers would just move to different hosts and spam would pick up again. Well, according to SpamCop's statistics, spam hasn't even started to return to its previous levels yet. You can see the near-vertical drop on the 11th when McColo was cut off, and peak levels since then have held pretty steady. I think one of the reasons is that other hosts looked at what happened to McColo and said "We don't want that happening to us.". I mean, it was pretty major: all of McColo's upstream providers simply pulled the plug on them, terminated the contracts and turned off the interconnect ports. When a spammer who got caught in that comes to another hosting provider, that provider's got to look at the potential downside of accepting the spammer: complete and total loss of all their business. And they can't say "Oh, that'll never happen.", because McColo is staring them in the face saying "Yes, it will.".

This is, frankly, what we need more of: providers who serve the spammers facing a credible threat of being cut off from the Net if they don't do something about the problem on their network. For the provider it's a question of money, and the best way to change their behavior is to change the cost-benefit equation so the cost of hosting a spammer is higher than the benefit from them.

Wednesday, November 5, 2008

BodyParts webapp

Slowly but surely I'm getting my head around writing a web app using the Spring framework. It's requiring a fair amount of work, but it's much easier to understand when I'm actually writing code and seeing it work (or, more often, fail miserably). I need to get error handling working, then add support for adding a new species, and then add the security bits to allow some users to edit and others to only view the data. Once I'm done I'll not only have a handle on Spring, I'll have a way to edit my guild's body-parts database while I'm in-game.

Wednesday, October 29, 2008

Projects I need to do

  • Clean up LJBackup. It's been sitting on the back burner for way too long.
  • Take the EQ2 body-part inventory system I've got, that's currently based on a hand-maintained CSV file, and convert it into a Spring MVC web app. For low-volume stuff I can run it under Tomcat on an odd port on my gateway machine. That'll let me update the inventory through the in-game browser while I'm actually moving body parts in and out. It'll also be a useful show-off project if I want to move into Java and/or web-app work.
  • Create a utility to back up Blogger blogs, both entries and comments, locally. This should be easier than LJBackup.
  • Multiple new computers. I've got the guts for a new gateway box (obsolete AMD dual-core Socket 939), just need a case for it. I can turn the existing machines into two boxes sufficient for the kids. That just leaves main/gaming systems.
  • I need to scheme some way of getting a hosted server and the time to configure and maintain it correctly. I want to handle e-mail, Web server, DNS and such myself and drop the XMission hosting.

Friday, October 24, 2008

Computer models and the credit crisis

This is more political than normal, but it touches on tech so I'm putting it here. Basically, in the Congressional hearings, Alan Greenspan blamed computer models in large part for the mortgage and credit crisis. I'm sorry, but that's not so. The fault isn't even with the data fed to the models. The fault lies with the people in the banking industry who looked at the models and said "What do we need to feed these models to get the results we want?". They didn't just feed the models wrong data, they outright manipulated the data they were feeding the models to ensure the models gave them a specific answer. That's always a recipe for disaster. They knew the models were telling them those sub-prime loans were bad bets, so they created "stated income" rules so they could feed higher incomes to the models and make the models say the loans weren't as risky. They created credit-default-swap instruments that they could factor into the models to make the loans appear less risky, and then decided not to factor in the risk of those credit-default swaps also failing.

The banking industry decided what answers they wanted, then hunted around until they got models and inputs that produced the desired result. You can't do that with any sort of model. And if you do, don't blame the models and the techs for your failure to listen to what the models were telling you when that didn't agree with what you wanted to hear.

But then, we see that in IT from management all the time. The techs say "If we do X, we're going to see these problems.". Management berates the techs for being obstructive and standing in the way of doing X when the rest of the company's agreed on it. Then all the predicted problems occur, and management berates the techs again for allowing those problems to happen. PHBs in action.

Wednesday, October 1, 2008

a.s.r quote

"Is there any equipment out there that doesn't suck?"
"Yes. Vacuum cleaners."

Monday, September 22, 2008

Browser process models

Everyone's praising IE8 for its new one-process-per-tab model. It's got many advantages over the threaded model used by most browsers, including the fact that a crash of one tab can't take other tabs or the browser down. What most people don't seem to get is that the multiple-process model isn't new with IE8, and in fact the threaded model was adopted on Unix only over the strenuous objections of everybody else. You see, threads exist for one reason and one reason only: on VMS and Windows (which was designed in part by the guy responsible for designing VMS), creating a process is expensive. You need threads in those OSes because you can't create a lot of processes quickly. But Unix had cheap process creation from day one. If you needed another thread of execution in Unix, you forked off another process. Threads weren't needed. But everybody in the Windows community kept braying about needing threads and why didn't Unix have them, oblivious to the fact that they already had what threads did in the fork() call. So Unix finally, reluctantly, adopted threads with all their pitfalls. And programmers used them heavily when processes would've been more appropriate. Until the pitfalls finally became too much to live with, when they went from just driving programmers nuts to causing problems for the very users who demanded them in the first place. So now we're back to where Unix was 25 years ago, forking a new process when you need a new thread of execution.

Thursday, September 18, 2008

Cooling a data center without AC

http://weblog.infoworld.com/sustainableit/archives/2008/09/intel_air_side.html

Intel has done a test, cooling a high-load data center model without using air conditioning, just ambient outside air. They did see a higher failure rate on the equipment, but not nearly as much higher as was expected. And the portion that used a simple, cheap cooling system without all the climate control of a true data-center AC unit had a lower server failure rate than the full-on data-center-class AC yielded. My feeling is that it's not the cold air that's critical to data-center health; the servers will be entirely happy with 80-90F cooling air. It's mostly the dust, and to a lesser degree the order-of-magnitude swings in humidity, that cause most of the failures. Scrub the dust from the air (even cheap AC units do this, and simple filter systems can do it too) and keep the humidity in the 5-35% range (no need to control it to +/- 2%) and you'll give the servers 95% of what they want to be happy. And you can do that for a fraction of the cost of a full climate-control system that controls everything to within 1%.

Tuesday, September 16, 2008

Need to update site

I really need to start updating Silverglass (the Web site) again. It used to have a lot of pages about the nitty-gritty of configuring RedHat Linux. I've long since switched to Debian, and the configuration tools have gotten a lot better since the days when getting e-mail address masquerading working involved manually editing sendmail.mc, but there are still things I'd like to document. Setting up the zone files for a DNS server, for example, and configuring a DNS "blackhole" list (a set of DNS domains you want to make not exist anymore, e.g. DoubleClick's domains, so that Web advertisements and such just vanish). And some things haven't changed: setting up a firewall, for example, and the details of Debian's startup scripts and how to play nice with them when writing your own are still useful information. And of course there are rants I want to write but haven't; the originals would go on my blogs, but copies can go on the Web site. Then there's code projects like the LJBackup program I can finish up and drop on the site.
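
The blackhole trick, for instance, amounts to declaring the unwanted domains as empty master zones in BIND. On a Debian box it's roughly the following (a sketch; the paths are the standard Debian bind9 ones, and you'd substitute whatever domains you want gone):

  # Make a domain "not exist" by serving it locally as an empty zone.
  # Appends a zone stanza to BIND's local config, then reloads the server.
  printf '%s\n' \
    'zone "doubleclick.net" {' \
    '    type master;' \
    '    notify no;' \
    '    file "/etc/bind/db.empty";' \
    '};' >> /etc/bind/named.conf.local
  /etc/init.d/bind9 reload

Repeat the stanza for each domain you want to disappear; clients using this server then get an empty answer instead of the advertiser's address.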

I just need to get time and energy together and actually do it.

Monday, September 15, 2008

Hubble Space Telescope discovers unidentified object

The HST has spotted an unidentified object. (Here's a more serious article.) It's not similar to any type of object previously seen, and its spectrum doesn't match that of any known object or type of object. There wasn't anything in that part of space before it blipped in, and there's nothing obvious there now that it's faded. So what is it? Nobody's got any good ideas yet.

"The most exciting phrase in science, the one most likely to herald a new discovery, is rarely "Eureka!". More often it's "That's funny. It's not supposed to do that, is it?"."

Wednesday, September 10, 2008

Large Hadron Collider

They've fired up the LHC today for the first tests. A lot of people have been making noises about the risk of it creating a black hole that'll swallow Earth. I'm sorry, the world won't be ending today.
  1. Yes, the LHC will be colliding particles with higher energies than anything humans have been able to manage to date. But the Universe isn't human, and has been colliding particles with energies orders of magnitude higher than the LHC's capable of for billions of years. Several such collisions happen here on Earth each year as high-energy cosmic rays impact the atmosphere. Given the length of time and the number of events per year, if those collisions could create a black hole that'd last any length of time, we'd've seen evidence of it happening before now.
  2. Any black hole the LHC might create will have only the mass of the particles involved in the collision. That's only going to be a couple of protons' worth. Such low-mass black holes emit a large amount of Hawking radiation relative to their mass, and that emitted radiation comes out of their mass. Low-mass black holes simply evaporate very quickly (within fractions of a second) after forming; there are rough numbers at the end of this post. So even if a black hole does get created, it'll disappear again probably before we even know it existed.
  3. Even if the black hole sticks around, it won't pull in enough matter to be a problem. Remember, black holes don't have any greater gravitational pull than anything else of the same mass; their only special property is how steep their gravity well is. The danger radius of such a low-mass black hole is going to be a fraction of the size of a subatomic particle. It's going to have to hit another subatomic particle almost dead-center to suck in any additional mass. At this scale "solid" matter is 99.999% empty space, so it's going to take that black hole a very long time (as in millions of years) to accumulate enough mass to begin pulling in matter on a macroscopic scale.
  4. Ignoring the above, the black hole will have the same velocity vector and momentum as the particles that created it. That velocity'll be well above the Earth's escape velocity, and tangent to its surface. Any such black hole will simply continue in a straight line away from Earth, never to return.
  5. And even ignoring the previous points, this is the test phase. They've only turned on one beam to calibrate and align things. With only one beam, there aren't going to be any particles colliding. So even if the LHC could create black holes, it won't be creating them today.
So, I'm sorry, but the crowbar-wielding Gordon Freeman won't be getting any screen time because of the LHC.
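
For the curious, here's the arithmetic behind point 2. Taking the standard semiclassical formula for the Hawking evaporation time of a black hole of mass M at face value,

  \[ t_{\mathrm{ev}} \approx \frac{5120\,\pi\,G^{2}\,M^{3}}{\hbar\,c^{4}} \]

plugging in a couple of proton masses (about 3x10^-27 kg) gives a lifetime on the order of 10^-96 seconds; for comparison, the same formula gives roughly 10^67 years for a solar-mass black hole. The formula can't really be trusted at masses that tiny, but the qualitative conclusion stands: any black hole the LHC could make is gone essentially the instant it forms.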

Thursday, August 28, 2008

Abit leaving the motherboard market

According to this article, Abit is exiting the motherboard market by the end of this year. Abit's been my preferred motherboard brand for a long time now, with the right mix of interfaces and options for what I want. Now I've got to find another brand that has models with the right mix and a history of reliability. Annoying.

Saturday, August 23, 2008

aseigo on new MS/Novell deal

http://aseigo.blogspot.com/2008/08/microsoft-and-novell-reaffirm-pact.html

aseigo has some comments on MS's new deal with Novell to buy more Linux support coupons. I have to agree with him. One thing that has bothered me about MS's activities is their nebulous claims about their IP that's supposedly infringed upon by Linux. My first reaction is "I'm from Missouri. Show me.". Exactly what intellectual property does Microsoft claim to own that's being infringed upon, and exactly what in a Linux distribution infringes upon it and how? Lay it out and let's get it resolved. And yet Microsoft won't do that. They play coy, dodging around saying exactly what it is they're accusing Linux of. And my immediate reaction to that is to think that they really don't have any claim that'll stand up to public scrutiny, that if they had to actually lay it out all they'd end up with is "We got nuthin'.". And that makes me immediately suspicious of any deal that supports them in this. When someone's running a scam (which is what making a false claim to get others to pay you is: a scam), there are only two kinds of people doing business with them: marks and accomplices. I probably want to avoid being either.

Thursday, August 21, 2008

DMCA: copyright owners must consider fair use

Copyright owners must consider fair use before filing a DMCA takedown notice. The full decision is here. The basic upshot is that copyright owners are required to consider whether a use of their material would reasonably be considered fair use under copyright law. The DMCA requires that the copyright owner have a good-faith belief that the use is infringing before they can file a takedown notice, and if the use falls under fair use and a reasonable person would have concluded this beforehand, then the "good-faith belief" test fails. That, BTW, leaves the copyright owner liable for damages and penalties if the target of the notice wants to push it. The downside, of course, is that showing bad faith is a difficult thing to do in court, but it's still nice to have the principle upheld.

The judge says he's not sanguine about the uploader's chances of proving bad faith on the part of the record label. I'm not so sure, at least if the judge is unbiased about it. The infringement in question is a song playing in the background of a baby video posted to YouTube. The Supreme Court has set forth four factors to consider in determining fair use: the nature of the use (commercial vs. non-commercial), the nature of the infringed work, the amount and substantiality of the portion used, and the effect of the infringement on the potential market for the work. It's going to be very hard for a record label to argue that people are going to put up with watching someone's baby video repeatedly just to save the cost of buying the song. They're also going to have a hard time arguing commercial use: YouTube may put ads on the page, but the uploader doesn't get any money from them and has no control over them, and the entity that does get the money (YouTube) isn't the one the label went after. Even the nature of the copyrighted work works against the label. The work is a song, and it's merely incidental background noise in a video whose point is to showcase the uploader's baby. The only factor that comes anywhere near working in the label's favor is the amount of the song that's audible, and that's countered by the fact that the song is purely incidental background. As I said, it's not likely anyone's going to watch this video mainly for the music, any more than anyone watches a football game mainly to see the advertisements pasted around the stadium. Given all that, if the uploader's got a good lawyer I think they can make a very strong case that the label couldn't reasonably have believed the use wouldn't meet the qualifications for fair use. And proceeding when you know or should know otherwise is the very definition of bad faith.

Tuesday, August 19, 2008

Google session vulnerability

At DefCon there was a presentation on a way to hijack a Google Mail session. Google's implemented a new option to counter it: the option to always use SSL. Now, important point: the attack is not the one that sniffs your session cookie if you're using an unencrypted link. That attack can be prevented merely by using SSL all the time. This attack will work even if you use SSL for everything. It works by inserting code on a non-GMail page that causes a request to the non-SSL GMail pages, and the browser will send the session cookie in that unencrypted request without you being aware of it. When you use Google's fix, setting your account to always use HTTPS, Google does more than just force you to an "https:" URL. It also wipes your existing session cookie and creates a new one with a flag on it telling the browser to only send that cookie in secure (HTTPS) requests. This prevents the cookie from ever being sent in the clear.
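
If you're curious whether a given site does this, the cookie attributes are right there in the response headers. Something along these lines will show them (a sketch; the hostname is a placeholder and the output line is made up, but a cookie carrying the Secure attribute will look like this):

  # Show the cookies a site sets over HTTPS; "Secure" means the browser
  # will never send that cookie over a plain HTTP request.
  curl -sI https://mail.example.com/ | grep -i '^set-cookie'
  # Hypothetical output:
  #   Set-Cookie: SESSION=abc123; Path=/; Secure; HttpOnly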

Wednesday, August 13, 2008

Law & Life: Silicon Valley: Major Victory for Open Source in Jacobsen Decision

Artistic License is a copyright license after all

In the Jacobsen v. Katzer case, the trial court had ruled that the Artistic License (the open-source license under which the software involved was distributed) was a contract, not a copyright license. The Court of Appeals for the Federal Circuit has overturned that ruling. The case is convoluted, because it originates not out of a copyright dispute but out of a patent issue; the copyright aspect came up out of the patent portion of the case. But it's good news nonetheless for open-source software. One of the standard arguments by open-source detractors is that the GPL and similar licenses are just contracts, subject to the vagaries of contract law, and that violations of them have to be pursued as contract breaches. Now it's possible to hold up this ruling and say to them "The US Court of Appeals disagrees with you.". Among other things, this affects the ability to recover costs. In a standard breach-of-contract suit the plaintiff, even if they win, is expected to bear their own costs except in unusual circumstances. In copyright-infringement actions, though, the law grants the prevailing party a much greater right to recover their costs and legal fees. This makes it easier for open-source authors to find lawyers willing to help them with copyright enforcement.

Monday, August 11, 2008

Credit-card system

You know, we need a change to the way credit-card purchases are handled. Card-present transactions, ones where you're physically there with the card to swipe, are OK. But card-not-present transactions need to change. Currently the system works by the merchant pulling money from your account. We need to change it so the card-holder pushes the payment to the merchant. That would eliminate the whole need for the merchant to store credit-card information, and eliminate a bunch of fraud in the process.

How would it work? Well, for a one-shot payment (your standard on-line purchase), check-out would proceed as normal except that when you told it you'd pay by credit card it wouldn't prompt for the card number. When you got to the confirmation page, it'd give you a merchant identity code and a transaction number. You'd then go to your credit-card issuer's Web site, log in and use those two numbers to generate a payment to the merchant. You'd of course verify that the merchant's identity code gave you the expected merchant name. You'd make the payment for exactly the amount the merchant gave as the total, and your card issuer would charge your card and transmit the payment to the merchant. The merchant could match the transaction number they got along with the payment with their order records, and ship your order only once they'd received your payment. The merchant's account would be solely for receiving money, nothing could be pulled out of it, so it'd be impossible to steal from the merchant. Nobody who knew your card number and other information could run a transaction, regardless of how much they knew, unless they also had the password for your account at the issuer and could log in as you to generate the payment. It'd be impossible for merchants to make unexpected charges to your card. And if the merchant claimed you hadn't sent the payment, you'd have your bank/issuer's record of the merchant accepting the payment as proof you had. This could all piggy-back on the bill-payment systems a lot of banks already have in place.

For recurring payments, it'd work two ways. For payments where the amount's known, the merchant could give you a customer identifier to use as the transaction number. Then you could simply set up an automatic recurring payment for that amount with your bank. For payments where the amount isn't known beforehand (e.g. utility bills), a back-channel could be provided where you give the merchant your card number or other bank-provided customer identifier, and the merchant can send a payment request to your bank using that identifier and providing the payment amount and a transaction number. That'd go into a payment-request list you could view, and you could generate payments to the merchant directly from that list. These payment requests could even be used for non-recurring charges too, with a checkbox in the payment-information step to indicate whether you wanted the merchant to generate a payment request or not and a way to give the merchant your customer identifier. For full auto-pilot operation, the bank might let you flag requests from certain merchants for auto-approval, preferably with a limit on the payment amount (e.g. if your electric bill was normally $45-55 you might put a limit of $75 on auto-approved payments, with anything above that requiring manual approval) and a timeframe (e.g. auto-approve the utility bills for the next 2 months while you're possibly on vacation). Of course auto-approval removes a lot of the protection against fraudulent and unauthorized charges.

For people without Web access, it still works. They obviously won't be buying on-line, not when they can't get to Web sites at all, so the impact's mainly to mail-order and telephone purchases. Payment authorization can be added to ATMs easily enough. It can probably be added to telephone banking systems, although it's easier with voice-recognition systems than with ones that depend on the touch-tone keypad to enter information. And of course it could be done by a teller at a bank branch. In the worst case, a simple interface to turn on auto-approval for payment requests from merchants you needed to pay would turn the system back into the traditional pull-payment system.

California IP and non-compete law

As a follow-up to the last post about non-competes, I thought I'd repost links to the relevant California codes on intellectual-property and non-compete agreements:
Anyone in the tech field in California should be familiar with these, because tech companies routinely put terms in their employment agreements that exceed what these laws allow. I made sure, when I signed my intellectual-property agreement, to add a notation referencing the limitations in 2870-2872 and making my acceptance limited to only what was allowed by those sections of the law.

Friday, August 8, 2008

Non-compete agreement? Not in California.

The California Supreme Court has ruled non-compete agreements illegal except in a very few circumstances. The law allows for them explicitly in cases involving the break-up of a corporation or partnership, but beyond those exceptions written into the law the Court ruled that the law simply prohibits an employer from restricting a former employee's right to engage in their profession. The full ruling is here. Given that it's the California Supreme Court ruling on this, Federal courts are likely to follow this ruling when interpreting California employment law. So if you work in California and your company had you sign a non-compete clause, it's out the window now.

Note that this doesn't mean you can do anything you want. If you got training at the company's expense, for instance, the clause that says you must either stay a certain length of time or repay the cost of the training (probably pro-rated) is still enforceable. If you do something like take company confidential information (e.g. software source code, customer lists, etc.) and give it to your new employer, your former employer has grounds other than non-compete to sue you on. And if you're a salesman and openly solicit your former company's customers to follow you to your new one, your former company again can sue you for that.

Monday, August 4, 2008

Telnet

Telnet: a useful program for determining whether a remote system is accepting connections on a given port, and for issuing direct commands to SMTP, HTTP and other similar services. Its use as a terminal emulation program is Not Recommended.
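
For example, to check that a web server is listening and talk HTTP to it by hand (www.example.com here is just a placeholder): connect, then type the two request lines followed by a blank line, and the server's response headers come back right in the same session.

  $ telnet www.example.com 80
  HEAD / HTTP/1.0
  Host: www.example.com

The same trick works for SMTP: telnet to port 25 and type the HELO/MAIL FROM/RCPT TO dialogue yourself.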

Tuesday, July 29, 2008

DNS vulnerability

There are a few things that need to be done to completely fix the DNS cache-poisoning vulnerability Dan Kaminsky discovered.

First, filter additional response data (glue records) aggressively. In delegation responses, the only acceptable glue should be A records for the names given in the NS records of that response. In non-delegation responses, only additional records for the exact name being queried should be accepted; records for other names should be discarded. If you're going to cache additional records, only records passing this filter should be cached. Ideally no additional records should be cached at all.

Second, implement DNSSEC across the board. It shouldn't be that hard; it just requires people to do the work. Signed data makes it impossible for an attacker to successfully get forged responses accepted (barring someone breaking the major public-key encryption algorithms).

Third, network operators near the edge of the network should implement ingress/egress filtering and require it of networks connecting to them. Towards the backbone there are too many netblocks on each interface to filter, but at the edges it's feasible to identify all the netblocks that should be sending packets across a given link. No network should ever permit a packet to go upstream unless its source address is in a netblock belonging to that network or to a downstream network. And no network should accept a packet from a downstream network unless it's sourced from a netblock attached downstream of that interface. That makes forging the source address (needed for the DNS cache-poisoning attack) nearly impossible.
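
On a Linux border router the egress half of that is only a handful of rules. A sketch (the netblocks here are documentation prefixes and the interface names are placeholders; a real edge router would list every customer netblock it's responsible for):

  # Egress: only let packets out the upstream link if their source address
  # belongs to one of our own or our downstream customers' netblocks.
  iptables -A FORWARD -o upstream0 -s 192.0.2.0/24 -j ACCEPT
  iptables -A FORWARD -o upstream0 -s 198.51.100.0/24 -j ACCEPT
  iptables -A FORWARD -o upstream0 -j DROP

  # Ingress: drop anything arriving from a downstream link that claims a
  # source address which doesn't live behind that link.
  iptables -A FORWARD -i cust0 ! -s 198.51.100.0/24 -j DROP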