Thursday, February 26, 2009

The Kindle 2 and newspapers

Thinking about the Kindle 2, I suspect it may be the salvation of newspapers. A lot of the cost of a newspaper is in the printing: the paper, the ink, the presses, the cost of distributing the sheer physical mass of paper. The Kindle 2 provides a secure subscription-based channel for delivering black-and-white printed content that doesn't require moving physical material around. Amazon already has a content distribution network set up. A newspaper could mark up their edition electronically and distribute it to Kindles. As long as nobody involved gets greedy, I think it could be profitable once the costs of physical printing and distribution are shed.

Tuesday, February 17, 2009

Verizon using mail submission port 587

Verizon is moving to port 587 for mail submission, requiring encryption and authentication to send mail. That alone won't stop the spam originating from their networks, but it's a start. My thought is that there should be 3 ports for 3 different purposes:
  • Port 25, no encryption or authentication required, is for server-to-server mail transfer. Relaying shouldn't be allowed: all e-mail arriving should be addressed to an in-network domain, and anything else should be rejected. Messages should not be modified except for adding an appropriate Received header.
  • Port 587, encryption and authentication required, is for end-user mail submission only. Mail submitted to it should have the Sender header stripped and replaced with one based on the authenticated username.
  • Port 465, encryption required and authentication allowed, is a hybrid. If the session isn't authenticated, it should act per the rules for port 25. Authenticated sessions should be allowed to relay. If relaying, authentication information should be added to the Received header, and if no Sender header is present one should be added based on the authentication information. Messages should not be otherwise altered. (A rough sketch of this three-port policy follows the list.)
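
To make the split concrete, here's a minimal sketch in C of the policy as a decision table. The enum and function names are invented for illustration; this isn't code from any real mail server, just the rules above written down.

/* Hypothetical sketch of the three-port policy described above.  The names
 * are invented; this is a decision table, not a real MTA. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { REJECT, ACCEPT_LOCAL, RELAY } action_t;

static action_t decide(int port, bool encrypted, bool authenticated,
                       bool recipient_is_local)
{
    switch (port) {
    case 25:    /* server-to-server: deliver to our own domains, never relay */
        return recipient_is_local ? ACCEPT_LOCAL : REJECT;
    case 587:   /* submission: encryption and authentication are mandatory */
        if (!encrypted || !authenticated)
            return REJECT;
        return RELAY;           /* Sender rewritten from the auth username */
    case 465:   /* hybrid: encryption mandatory, authentication optional */
        if (!encrypted)
            return REJECT;
        if (authenticated)
            return RELAY;       /* auth info recorded in the Received header */
        return recipient_is_local ? ACCEPT_LOCAL : REJECT;
    default:
        return REJECT;
    }
}

int main(void)
{
    /* an unauthenticated client trying to relay through port 587 */
    printf("%s\n", decide(587, true, false, false) == REJECT ? "rejected" : "allowed");
    return 0;
}

The point of the split is that each port has exactly one job, so a message that arrives on the wrong port with the wrong credentials gets rejected outright instead of negotiated with.
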
One thing many ISPs ignore (often, I suspect, willfully) is customers who do not use their ISP as their mail provider. I'm an example. I get my Internet connection from Cox, but XMission in Utah hosts my domain and handles my e-mail.

The Pirate Bay trial

Apparently half the charges against The Pirate Bay have been dropped by the prosecution. This isn't based on a technicality, as I read it, but on something as basic as the screenshots the prosecution offered as evidence that the client was connected to the TPB tracker clearly showing it was not connected to the tracker. It's no wonder the prosecution dropped those charges rather than continue. If they had continued, the defense would've introduced the prosecution's own screenshots and the prosecutor wouldn't have been able to rebut them.

I don't particularly agree with piracy, but when the prosecutors screw up this badly they deserve to lose.

Thursday, February 12, 2009

Code comments

Motivated by a Daily WTF entry on code comments. I know exactly what creates this: Comp Sci instructors. You know them, the ones who insist that every line of code be commented, no matter how trivial. After a couple of years of this, students get in the habit of including comments just to have a comment for that line of code. Of course the easy way to do this is to just restate what the line of code does.

Now, as a professional programmer doing maintenance on code, I don't need to know what the code does. I can read that line of code and see exactly what it does. I need something a bit higher-level. I need to know what the code's intended to do, and I need to know why it's doing it and why that way of doing it was selected. I know it's iterating down a list looking for an entry; I need to know what that list is for and why the code's looking for that particular entry. Instead of comments describing how to iterate through a list, I need a block comment saying something like "We've got our list of orders entered this week, we know they're ordered by vendor, and we're trying to find the first order for a particular vendor so we can extract all his orders quickly.". Probably followed by something like "Now that we have this vendor's first order, we'll grab everything until we see the vendor number change. When we see that, we've got all his orders.". Much shorter, wouldn't satisfy that instructor at all since most of the code won't be commented, but much, much more useful when I have to change things. It tells me what the code's trying to do, and what assumptions it's making that I have to avoid invalidating.
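
To illustrate the difference, here's a small example (a made-up order record, not code from any real system) with the comment written at the level I'm describing:

/* Illustrative only: hypothetical order records.  The point is the
 * comment, not the search. */
#include <stdio.h>
#include <stddef.h>

struct order { int vendor; int id; };

/*
 * We've got this week's orders and we know they're sorted by vendor, so we
 * find this vendor's first order and then sweep forward until the vendor
 * number changes.  That sortedness assumption is the thing a maintainer
 * must not break.
 */
static size_t orders_for_vendor(const struct order *orders, size_t count,
                                int vendor, size_t *first)
{
    size_t i = 0;
    while (i < count && orders[i].vendor != vendor)
        i++;                                /* first order for this vendor */
    *first = i;

    size_t n = 0;
    while (i + n < count && orders[i + n].vendor == vendor)
        n++;                                /* stop when the vendor changes */
    return n;
}

int main(void)
{
    struct order week[] = { {7, 101}, {7, 102}, {9, 103}, {9, 104}, {9, 105} };
    size_t first;
    size_t n = orders_for_vendor(week, 5, 9, &first);
    printf("vendor 9: %zu orders starting at index %zu\n", n, first);
    return 0;
}

There's not a single "increment i" comment in there, and there doesn't need to be; the block comment records the one thing the code can't say for itself.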

Too many Comp Sci teachers need some real-life experience maintaining the kind of code they exhort their students to produce.

Tuesday, February 3, 2009

Powerful and easy to use

"You'll find this system to be incredibly flexible and powerful. You'll also find it to be incredibly simple and easy to use with minimal training. However, don't ask it to be both at the same time."

Monday, January 19, 2009

Security vulnerabilities and disclosure

Triggered by this SecurityFocus column. I've always been a proponent of full disclosure: releasing not just a general description of a vulnerability but the details on how it works and how to exploit it. I considered that necessary because vendors are prone to saying "There's no practical exploit there." and stalling on fixing it, and the only way to prove there is a practical exploit is to actually produce the code to exploit the vulnerability. It also removes any question about whether you're right or wrong about the vulnerability. There's the code, anybody can verify your claims for themselves. But I've also always been a proponent of telling the vendor first, and giving them the opportunity to close the hole themselves before the rest of the world gets the details. General public disclosure was, in my view, the last resort, the stick to wave at the vendor that you'd employ only if they didn't act with reasonable dispatch to actually fix the problem.

But, as this column points out, these days the vendor's most likely to respond not by trying to fix the problem but by hauling you into court to try and silence or even jail you for having the temerity to tell them they've got a problem in their software. Which is leading me to believe that responsible disclosure, while preferable, simply isn't viable anymore. The only safe thing to do, the only effective way to get vendors to respond to problems, is to dump all the details including working exploit code out into public view so the vendor can't ignore it, and to do it anonymously (making sure to cover your tracks thoroughly and leave no trail leading back to you) so the vendor doesn't have a target to go after. That's the only way to avoid months if not years of legal hassles and courtroom appearances, all for trying to tell the vendor privately that they had a problem. IMO this is a sad state of affairs, but it also seems to be the way the vendors want it to be.

It's either this, or fight for court rulings saying that vendors have no legal right to hound researchers who try to disclose privately to the vendor. In fact, we need legal rulings saying that a vendor who tries to silence the reporters of a vulnerability instead of fixing the vulnerability makes itself legally liable for the results of that vulnerability. Short of that, researchers have to protect themselves.

Tuesday, January 13, 2009

Sysadmin advice

Really good advice:
http://www.bynkii.com/archives/2009/01/for_new_sysadminsit_types.html

Meece

Grumble. The trusty old Microsoft Trackball Optical I've been using at work is starting to go. The optics work fine, but the ball is starting to stick and not want to roll easily. I've no idea what's causing it or how to correct it. It's done it and then cleared up a couple of times before, but it's happening more often and not clearing up as readily each time. So now I have to go get a replacement. Microsoft doesn't make this model of trackball anymore, the Logitech thumb-operated trackballs are all too narrow for my hand, and the finger-operated trackballs I just can't get used to. So I guess it's back to a laser mouse for me.

Monday, January 12, 2009

Mass transit

The major problem with mass transit is, frankly, that it's inconvenient for the things people commonly need to do: stuff like shopping, or quick runs to random places. It's hard to bring anything back, let alone large items like a television or a full load of groceries for a family, and usually the buses and trains take twice as long to get there as a car would even after allowing for traffic snarls. I don't see a fix for this as long as mass transit is designed around large-capacity transports running on fixed routes on a fixed schedule. What we need is a completely different design, which will require a street network designed to accommodate it.

First, the basic local unit is a transit pod running in a dedicated guideway. Stops are cut-outs where the pods can get out of the traffic flow. Pods would come in 3 sizes to cut down on the number of varieties needed. 2-seat pods are designed to hold just people, no cargo, to provide a physically small unit for getting individuals from point A to point B when they don't need to carry much more than a backpack or briefcase. Larger pods are 2-row and 4-row versions, with the rear rows designed to fold flat into the floor to convert seating into cargo deck as needed for that particular trip. These don't run on fixed schedules or routes; people call them to a stop as needed based on how many people are in their group and how much cargo they expect to have, and pick the destination once they're in. Pods are routed automatically by the shortest, least-congested path. Guideways don't have to run on every single street, but they should run on enough that it's never more than half a block from any house to a pod stop. For instance, in a residential neighborhood the guideways might run on every east-west street so you have to walk no more than half a block north or south to a guideway. The preference, though, would be to have a guideway on every street so pods can stop literally at your driveway. With this kind of routing, you avoid the waits to change lines that're typical of conventional bus and train systems.
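
As a toy illustration of that routing idea (invented stop numbers and weights, nothing more), a dispatcher could run plain Dijkstra over the guideway graph with each link's travel time scaled by its current congestion:

/* Toy sketch: shortest, least-congested path between guideway stops.
 * All numbers are invented for illustration. */
#include <stdio.h>

#define STOPS 5
#define INF   1000000.0

int main(void)
{
    /* travel_time[i][j]: seconds from stop i to stop j; 0 = no direct link */
    double travel_time[STOPS][STOPS] = {
        {  0,  60,   0,   0, 300 },
        { 60,   0,  90,   0,   0 },
        {  0,  90,   0,  45,   0 },
        {  0,   0,  45,   0,  30 },
        {300,   0,   0,  30,   0 },
    };
    /* congestion[i][j]: 1.0 = free-flowing, larger = more pods on that link */
    double congestion[STOPS][STOPS];
    for (int i = 0; i < STOPS; i++)
        for (int j = 0; j < STOPS; j++)
            congestion[i][j] = 1.0;
    congestion[0][1] = congestion[1][0] = 3.0;   /* pretend link 0-1 is jammed */

    double dist[STOPS];
    int visited[STOPS] = { 0 };
    for (int i = 0; i < STOPS; i++)
        dist[i] = INF;
    dist[0] = 0.0;                               /* route a pod from stop 0 */

    for (int iter = 0; iter < STOPS; iter++) {
        int u = -1;
        for (int i = 0; i < STOPS; i++)          /* nearest unvisited stop */
            if (!visited[i] && (u < 0 || dist[i] < dist[u]))
                u = i;
        if (u < 0 || dist[u] >= INF)
            break;
        visited[u] = 1;
        for (int v = 0; v < STOPS; v++) {        /* relax links out of u */
            if (travel_time[u][v] <= 0)
                continue;
            double w = travel_time[u][v] * congestion[u][v];
            if (dist[u] + w < dist[v])
                dist[v] = dist[u] + w;
        }
    }

    /* prints 300: with link 0-1 jammed, the direct 0-4 guideway beats the
     * 0-1-2-3-4 route (345); uncongested, 0-1-2-3-4 would win at 225 */
    printf("cost from stop 0 to stop 4: %.0f\n", dist[4]);
    return 0;
}

The interesting part is that the congestion factor changes which route wins, which is exactly the behavior you can't get from fixed bus routes.
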

Pods would operate in a large area, and in theory you can take a pod for the entirety of a trip anywhere within a sane distance, but for longer-distance travel inter-area trams would be used. These wouldn't run everywhere. They'd connect transit hubs, and be organized into lines much the way trains are currently. There would, however, be more interconnection than is typical of train lines, so you could take a direct route with less going out of your way and changing trains at a central station. I call them trams rather than trains because I'd design them using dedicated guideways like the pods rather than rails, so at a hub a tram could choose between multiple ways out. That way the system can dynamically allocate trams to routes at each hub to accommodate traffic. If you're going further than a few miles, you'd typically take a pod to the nearest hub and grab a tram to a hub near your destination. If you picked up cargo that couldn't be delivered, you'd take a pod the whole way back.

Using guideways also allows another trick: commercial pods could be designed that'd run on both pod and tram guideways. A store could, for instance, load up a delivery pod with loads for several customers in the same area and route it out (on a pod guideway to the nearest tram hub, then over the tram guideways to a hub near its destination, and finally via pod guideways to a stop near the delivery address) to drop off deliveries for customers.

The major problem I see with implementing this is that you'd need to majorly disrupt the street network to build the guideways. You literally can't do this on top of the existing streets: you'd need to redesign the streets to accommodate the guideways (no more on-street parking, the guideways will be occupying that space) and have a whole new way to handle cars crossing the guideways without interfering with pod traffic (probably requiring traffic-control gates). IMO it'd be worth it once implemented, but the up-front cost of implementing it makes it a hard sell.

Monday, January 5, 2009

Programmers and undefined behavior

ISAGN for a sadistic C/C++ compiler to school the current generation of programmers in the dangers of relying on undefined behavior. Too many of them do things like assume that dereferencing a null pointer should cause a crash and core dump. The problem is that nothing says that. The C++ standard leaves the behavior in that case undefined, which means the code, and possibly the compiler, is free to do anything it wants at that point. It doesn't even have to consistently do the same thing every time.

So, a sadistic compiler. At run time it'd check for various sorts of undefined behavior. When it detected them, e.g. an attempt to use the result of dereferencing a null pointer (as by calling a method through a null object pointer), it'd branch to a routine that'd randomly select from a list of dangerous things to do, such as spewing the contents of /dev/random onto the console, kill -9ing a random process on the system, or zeroing all blocks on a random attached storage device. In addition, the compiler would check for any undefined behavior it could feasibly check for, and take similar actions when asked to compile such code. Thus, trying to compile "x = a++ - a++;" might result in your system disk being wiped.
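
For the record, here are a few of the constructs in question. Each compiles cleanly, and each is undefined, so the standard permits any outcome at all, including different outcomes on different runs or at different optimization levels:

/* Every flagged line below is undefined behavior. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    int a = 1;
    int x = a++ - a++;        /* unsequenced modifications of 'a': undefined */

    int *p = NULL;
    (void)p;                  /* only used in the commented-out line below */
    /* printf("%d\n", *p); */ /* null dereference: undefined; a crash is
                                 common, but nothing guarantees one */

    int i = INT_MAX;
    i = i + 1;                /* signed integer overflow: also undefined */

    printf("%d %d\n", x, i);  /* whatever this prints proves nothing */
    return 0;
}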

The goal: to impress upon programmers that you cannot rely on undefined behavior. At all. Not on what it does, not even that it does the same thing all the time. The only guarantee you have is that you won't like what'll happen. So avoid it like the plague, or pay the price.

Saturday, January 3, 2009

Zune lock-up bug

Owners of Microsoft's Zune MP3 player saw their devices lock up hard at the end of 2008. It turns out there's a leap-year bug in Microsoft's code. The clock in the Zune records time as the number of days and seconds since 1/1/1980. To convert that into a normal calendar time, the Zune starts with this code to convert the number of days to years and days:

year = ORIGINYEAR;
while (days > 365)
{
    if (IsLeapYear(year))
    {
        if (days > 366)
        {
            days -= 366;
            year += 1;
        }
    }
    else
    {
        days -= 365;
        year += 1;
    }
}

It's basically looping through incrementing the year and decrementing days by the number of days in that year until it's got less than a full year's worth of days left. The problem comes on the last day of a leap year. In that case, days will be 366 and IsLeapYear() will return true. The loop won't terminate because days is still greater than 365. But the leap-year path inside the loop won't decrement the days because days isn't greater than 366. End result: infinite loop on 12/31 of any leap year. This bug should've been caught during standard testing. Leap years are a well-known edge case when dealing with dates, and likewise the first and last days of the year and the transition from one year to the next are standard problem points where any errors tend to show up.
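
For what it's worth, one way the loop could be fixed (a sketch, not Microsoft's actual patch) is to stop when the remaining days land on day 366 of a leap year:

year = ORIGINYEAR;
while (days > 365)
{
    if (IsLeapYear(year))
    {
        if (days > 366)
        {
            days -= 366;
            year += 1;
        }
        else
        {
            break;    /* 12/31 of a leap year: 366 days left, we're done */
        }
    }
    else
    {
        days -= 365;
        year += 1;
    }
}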

Microsoft's proposed solution: wait until sufficiently far into 1/1 of the next year, then force a hard reset of your Zune. Yeah, that'll get your Zune working but it doesn't fix the bug. Bets that Microsoft's fix for this code causes a different kind of failure on 1/1/2013?

Friday, January 2, 2009

JournalSpace mistakes mirroring for backups, dies

JournalSpace is dead.

Short form: JournalSpace depended on drive mirroring for back-ups. Something proceeded to overwrite the drives with zeros, and the mirroring politely overwrote the mirrors with zeros too. Their entire database is gone, all blogs, everything.

Repeat after me: mirroring is not a back-up. RAID and drive mirroring are for reliability and fault-tolerance. They'll protect you against hardware failure. They won't protect you against software doing something stupid or malicious. If the software says "Write this data to this location on the disk.", mirroring software and RAID drivers will write exactly that data to every copy; they won't second-guess it. If you're depending on your mirrors to contain something other than exactly what the main drive contains, well, you'll end up where JournalSpace is. You need point-in-time backups to external media, something that won't duplicate what's on the main drive unless and until you do the duplication yourself. That's the only way to ensure that, if your software writes corrupted data to your main disks, the backups don't have the same corruption written to them, as long as you catch the problem before you overwrite the backups with new ones. This is also, BTW, why you have more than one set of backup media: so if you do run your backups before catching a problem, you've got older backups to fall back on.

This should also be a cautionary tale for anybody wanting to host their data, application or whatever "in the cloud". If you do, and it's at all important, make sure you a) can and do make your own local backups of everything and b) have a fall-back plan in the event "the cloud" suddenly becomes unavailable. Unless, of course, you want to end up like the people who had their journals on JournalSpace: everything gone, no way to recover, because of somebody else's screw-up.

Tuesday, December 16, 2008

My network layout

For those interested, this is how I lay out my network.

The center of the network is the gateway/router box. It's got three network interfaces on it. eth0 connects to the wired LAN, network 192.168.171.0/24. Off the gateway is an 8-port switch that handles the computer room, with a run out to the living room and a 5-port switch for the game console and my laptop when I'm out on the couch or I've run a cable outside to sit on the walkway. It's all 100-megabit Ethernet; I'm planning on upgrading to gigabit at some point. Outgoing connections/flows from this network are relatively unrestricted. The only block is for DNS, which is permitted only to the gateway box.

eth1 on the gateway connects to a 5-port switch where the wireless access points are attached on network 192.168.217.0/24. This network is considered untrusted, forwarding from it to other networks isn't permitted at the router so machines on it can only talk to each other or the router. In addition the router blocks incoming traffic on that interface except for DHCP and IPSec. This limits what a rogue machine on that network can do. The access points don't have WEP/WPA enabled, but they do do MAC filtering. When I upgrade to a faster AP I may enable WPA using authentication just to annoy the kiddies. The primary use for this network is to carry the IPSec VPN traffic on the 192.168.33.0/24 network. This network is considered trusted, and has the same outbound restrictions as the wired LAN segment. Forwarding between the wired LAN and VPN segments is unrestricted and un-NATed.

eth2 on the gateway is connected to the cable modem and gets its address via DHCP from Cox. Traffic from the wired LAN and VPN going out this interface is NATed. Incoming new connections are generally blocked, with specific holes opened for connection to services on the gateway machine (ssh, Apache on a high port, the DHCP client). The outside world is considered untrusted. Outgoing traffic has a few restrictions to prevent un-NATed addresses from escaping.

Most security is based on limited physical access to the hard-wired network. The wireless network that can be reached from outside the apartment is treated the same as any network I don't control. Laptops on it should be firewalled as if operating on a hostile network, and use IPSec to make a VPN connection to the rest of my network. This avoids my having to rely on hardware I don't completely control for security.

Tuesday, December 9, 2008

DNSChanger malware

The Washington Post's security blog is reporting on the DNSChanger malware. This stuff isn't new. It does two things: changes your computer's DNS resolver settings so it uses DNS servers belonging to the bad guys instead of the servers your ISP provides, and activates a DHCP server so that other computers (say ones attaching to your wireless network) will take addresses and settings (including DNS server settings) from the malware instead of the legitimate server. The result is that everything you do, and everything those machines do, gets funneled through the bad guys' systems along the way. It's hard for security software to detect this: there are no mismatches and no spoofing to spot as a clue there's something wrong.

On my network, this won't work. The DHCP server part might, within limits. But my wireless access point is on a separate physical network from the hard-wired machines and the firewall blocks the DHCP protocol between the networks. With the VPN active, that limits the damage the rogue DHCP server can do. And my firewall also blocks the DNS protocol at the edge of my network. While on my network, you simply can't use any DNS server except the one I provide. If you try, your query packets will get rejected by the firewall when they try to leave my network. That means if you do get infected by DNSChanger, your DNS will simply stop working completely on my network until the problem's fixed. And my gateway machine, the one machine on my network that gets to do DNS queries to the outside world, doesn't run Windows, isn't used for Web browsing and isn't very susceptible to infection by malware.

Wednesday, December 3, 2008

BodyParts webapp

I finished the first pass at the BodyParts webapp that lets me view and maintain the Legends&Lore body parts inventory for my guild in EQ2. Security framework enabled, anonymous viewing and a login for editing, Tomcat set up properly, Apache configured to front for it. Now I can update the inventory from the in-game browser, no more writing things down on paper and windowing out to update text files.

Next things on the list:
  • Add add/subtract fields and buttons to the editing screen so that instead of having to do the math in my head I can just punch in the number of parts and hit Add or Subtract depending on whether I'm depositing or withdrawing parts.
  • Move the inventory from an XML data file into a real database table. I'll want to move the user information into a database table in the process and change authentication to match.
  • Reconfigure things to have all webapps in their own sub-tree so they don't sit directly at the root of the content tree. I'll have to see whether I want Apache to re-write URLs (so the user sees /webapp/BodyParts/ while Tomcat sees just /BodyParts/) or whether I want to make the change within Tomcat.
  • Change things in Eclipse to not depend on MyEclipse for the Spring framework and supporting libraries. I'm finding that as I get more familiar with things it's easier to edit the XML directly than to depend on MyEclipse's tools.
It's been a useful exercise so far.

Monday, December 1, 2008

MS claims in Vista lawsuit

Microsoft is currently embroiled in a lawsuit over Windows Vista. The plaintiffs there are claiming that Microsoft falsely advertised machines as Vista Capable when they weren't capable of running the Vista shown in advertisements. One of Microsoft's responses to this is that Vista Basic, which the machines are capable of running, is a real version of Vista and therefore there wasn't any false advertising. This isn't going to fly. The question isn't whether Vista Basic is a real version of Vista, it's whether it's the Vista advertised to everyone. And it isn't. It's missing almost all the elements shown in the advertisements, and nowhere in the ads does it say that what's shown may not be present. In fact all the ads emphasize all those elements shown as the things that make Vista Vista. That's going to kill Microsoft.

This isn't a software trademark or copyright case. This is straight-up consumer law that's been around almost as long as car salesmen have been. If you advertise a car on TV and show the supercharged V8, 6-speed manual transmission, sports suspension, full leather interior, full power everything with sunroof version, and that's all you show, and you end the ad with "Starting from $9000.", then courts have held that you need to deliver the car as advertised starting at $9000. If the only model you offer at $9000 is the 4-cylinder, 4-speed automatic, cheap cloth interior, manual everything stripped-down model, the courts will nail you for false advertising. You did the advertisement knowing you couldn't and wouldn't deliver what was shown for the price you quoted, and you don't get to do that. Which is why every car advertisement, when they show the price, always says in readable text underneath the "Starting at" stuff something like "Base price. As shown, $LARGER_NUMBER.". And Microsoft didn't do the equivalent. They showed Vista with the Aero interface, emphasized the Aero interface as a defining characteristic of Vista, and never gave a hint in the ads that Vista might not actually contain the Aero interface. A reasonable person, looking at the ads and the "Vista Capable" sticker, would conclude that that sticker meant the machine was capable of running what Microsoft was advertising as Vista. And it can't. And all those e-mails from Microsoft execs show that Microsoft knew it. Bam. Stick a fork in 'em, they're done. They can wriggle all they want, but when it comes down to it they're going to lose on that point for the same reason car dealers lost and had to start adding that explanatory text.

Friday, November 28, 2008

The Srizbi botnet

After being taken down when McColo was shut down, then resurrecting itself, the Srizbi botnet has been taken down once again. It won't last, though: the botnet will start searching domain names trying to reestablish contact with its command-and-control network, and its operators will probably succeed. We need new tactics:
  1. Instead of trying to block the botnet when it searches for a new C&C network, intercept it. Put something up at the next few domains it'll try that will respond correctly and take control of the botnet. Once you've got control, use that control to have the botnet download a new module that'll remove the botnet software from the computer, then shut down the computer until the user reinstalls Windows.
  2. Start sanctioning users who allow their computers to become infected. Right now there's no significant penalty assessed against the people who continue to allow their computers to be used by the botnet month after month after month. Start penalizing them. If they get infected, their Internet access gets suspended until they can provide evidence to their ISP that they've cleaned their machine up. Second offense, they get a 3-month suspension. Third offense, they get permanently disconnected. It's the Information Superhighway, and just like the real road system if you accumulate too many points we take away your driver's license.
Keeping a computer secure takes some effort and thought, especially if you're running Windows which was designed to be vulnerable. If there aren't noticeable penalties for being insecure, users just won't put forth that effort.

Friday, November 21, 2008

Spam volume

Early last week McColo was shut down. They provided hosting to nearly half the spam-related community. Some people were saying it wouldn't make a difference, the spammers would just move to different hosts and spam would pick up again. Well, according to SpamCop's statistics, spam hasn't even started to return to its previous levels yet. You can see the near-vertical drop on the 11th when McColo was cut off, and peak levels since then have held pretty steady. I think one of the reasons is that other hosts looked at what happened to McColo and said "We don't want that happening to us.". I mean, it was pretty major: all of McColo's upstream providers simply pulled the plug on them, terminated the contracts and turned off the interconnect ports. When a spammer who got caught in that comes to another hosting provider, that provider's got to look at the potential down-side of accepting the spammer: complete and total loss of all their business. And they can't say "Oh, that'll never happen.", because McColo is staring them in the face saying "Yes, it will.".

This is, frankly, what we need more of: providers who serve the spammers facing a credible threat of being cut off from the Net if they don't do something about the problem on their network. For the provider it's a question of money, and the best way to change their behavior is to change the cost-benefit equation so the cost of hosting a spammer is higher than the benefit from them.

Wednesday, November 5, 2008

BodyParts webapp

Slowly but surely I'm getting my head around writing a web app using the Spring framework. It's requiring a fair amount of work, but it's much easier to understand when I'm actually writing code and seeing it work (or, more often, fail miserably). I need to get error handling working, then add support for adding a new species, and then add the security bits to allow some users to edit and others to only view the data. Once I'm done I'll not only have a handle on Spring, I'll have a way to edit my guild's body-parts database while I'm in-game.

Wednesday, October 29, 2008

Projects I need to do

  • Clean up LJBackup. It's been sitting on the back burner for way too long.
  • Take the EQ2 body-part inventory system I've got, that's currently based on a hand-maintained CSV file, and convert it into a Spring MVC web app. For low-volume stuff I can run it under Tomcat on an odd port on my gateway machine. That'll let me update the inventory through the in-game browser while I'm actually moving body parts in and out. It'll also be a useful show-off project if I want to move into Java and/or web-app work.
  • Create a utility to back up Blogger blogs, both entries and comments, locally. This should be easier than LJBackup.
  • Multiple new computers. I've got the guts for a new gateway box (obsolete AMD dual-core Socket 939), just need a case for it. I can turn the existing machines into two boxes sufficient for the kids. That just leaves main/gaming systems.
  • I need to scheme some way of getting a hosted server and the time to configure and maintain it correctly. I want to handle e-mail, Web server, DNS and such myself and drop the XMission hosting.