Triggered by this SecurityFocus column. I've always been a proponent of full disclosure: releasing not just a general description of a vulnerability but the details on how it works and how to exploit it. I considered that necessary because vendors are prone to saying "There's no practical exploit there." and stalling on fixing it, and the only way to prove there is a practical exploit is to actually produce the code that exploits the vulnerability. It also removes any question about whether you're right or wrong about the vulnerability: there's the code, and anybody can verify your claims for themselves. But I've also always been a proponent of telling the vendor first and giving them the opportunity to close the hole themselves before the rest of the world gets the details. General public disclosure was, in my view, the last resort, the stick to wave at the vendor that you'd employ only if they didn't act with reasonable dispatch to actually fix the problem.
But, as this column points out, these days the vendor's most likely response is not to fix the problem but to haul you into court to try to silence or even jail you for having the temerity to tell them they've got a problem in their software. Which is leading me to believe that responsible disclosure, while preferable, simply isn't viable anymore. The only safe thing to do, the only effective way to get vendors to respond to problems, is to dump all the details including working exploit code out into public view so the vendor can't ignore it, and to do it anonymously (making sure to cover your tracks thoroughly and leave no trail leading back to you) so the vendor doesn't have a target to go after. That's the only way to avoid months if not years of legal hassles and courtroom appearances, all for trying to tell the vendor privately that they had a problem. IMO this is a sad state of affairs, but it also seems to be the way the vendors want it.
It's either this, or fight for court rulings saying that vendors have no legal right to hound researchers who try to disclose privately to the vendor. In fact, we need legal rulings saying that a vendor who tries to silence the reporters of a vulnerability instead of fixing the vulnerability makes themselves legally liable for the results of that vulnerability. Short of that, researchers have to protect themselves.
Tuesday, January 13, 2009
Meece
Grumble. The trusty old Microsoft Trackball Optical I've been using at work is starting to go. The optics work fine, but the ball is starting to stick and not want to roll easily. I've no idea what's causing it or how to correct it. It's done it and then cleared up a couple of times before, but it's happening more often and not clearing up as readily each time. So now I have to go get a replacement. Microsoft doesn't make this model of trackball anymore, the Logitech thumb-operated trackballs are all too narrow for my hand, and I just can't get used to the finger-operated ones. So I guess it's back to a laser mouse for me.
Monday, January 12, 2009
Mass transit
The major problem with mass transit is, frankly, that it's inconvenient for the things people commonly need to do: shopping, quick runs to random places. It's hard to bring anything back, let alone large items like a television or a full load of groceries for a family, and the buses and trains usually take twice as long to get there as a car would, even after allowing for traffic snarls. I don't see a fix for this as long as mass transit is designed around large-capacity transports running on fixed routes on a fixed schedule. What we need is a completely different design, which will require a street network designed to accommodate it.
First, the basic local unit is a transit pod running in a dedicated guideway. Stops are cut-outs where the pods can get out of the traffic flow. Pods would come in three sizes to cut down on the number of varieties needed. Two-seat pods are designed to hold just people, no cargo, providing a physically small unit for getting individuals from point A to point B when they don't need to carry much more than a backpack or briefcase. Larger pods are 2-row and 4-row versions, with the rear rows designed to fold flat into the floor to convert seating into cargo deck as needed for that particular trip. These don't run on fixed schedules or routes: people call them to a stop as needed based on how many people are in their group and how much cargo they expect to have, and pick the destination once they're in. Pods are routed automatically by the shortest, least-congested path. Guideways don't have to run on every single street, but they should run on enough that it's never more than half a block from any house to a pod stop. For instance, in a residential neighborhood the guideways might run on every east-west street so you have to walk no more than half a block north or south to a guideway. The preference, though, would be to have a guideway on every street so pods can stop literally at your driveway. With this kind of routing, you avoid the waits to change lines that're typical of conventional bus and train systems.
Pods would operate in a large area, and in theory you can take a pod for the entirety of a trip anywhere within a sane distance, but for longer-distance travel inter-area trams would be used. These wouldn't run everywhere. They'd connect transit hubs, and be organized into lines much the way trains are currently. There would, however, be more interconnection than is typical of train lines, so you could take a direct route with less going out of your way and changing trains at a central station. I call them trams rather than trains because I'd design them using dedicated guideways like the pods rather than rails, so at a hub a tram could choose between multiple ways out. That way the system can dynamically allocate trams to routes at each hub to accommodate traffic. If you're going further than a few miles, you'd typically take a pod to the nearest hub and grab a tram to a hub near your destination. If you picked up cargo that couldn't be delivered, you'd take a pod the whole way back.
Using guideways also allows another trick: commercial pods could be designed that'd run on both pod and tram guideways. A store could, for instance, load up a delivery pod with loads for several customers in the same area and route it out (on a pod guideway to the nearest tram hub, then over the tram guideways to a hub near its destination, and finally via pod guideways to a stop near the delivery address) to drop off deliveries for customers.
The major problem I see with implementing this is that you'd need to majorly disrupt the street network to build the guideways. You literally can't do this on top of the existing streets; you'd need to redesign the streets to accommodate the guideways (no more on-street parking, since the guideways will be occupying that space) and have a whole new way to handle cars crossing the guideways without interfering with pod traffic (probably requiring traffic-control gates). IMO it'd be worth it once implemented, but the up-front cost of implementing it makes it a hard sell.
Labels:
public transit
Monday, January 5, 2009
Programmers and undefined behavior
ISAGN for a sadistic C/C++ compiler to school the current generation of programmers in the dangers of relying on undefined behavior. Too many of them do things like assume that dereferencing a null pointer should cause a crash and core dump. The problem is that nothing says that. The C++ standard leaves the behavior in that case undefined, which means the generated code, and possibly the compiler itself, is free to do anything it wants at that point. It doesn't even have to consistently do the same thing every time.
So, a sadistic compiler. At run time it'd check for various sorts of undefined behavior. When it detected them, e.g. an attempt to use the result of dereferencing a null pointer (as by calling a method through a null object pointer), it'd branch to a routine that'd randomly select from a list of dangerous things to do, such as spewing the contents of /dev/random onto the console, kill -9ing a random process on the system, or zeroing all blocks on a random attached storage device. In addition, the compiler would check for any undefined behavior it could feasibly detect at compile time, and take similar actions when asked to compile such code. Thus, trying to compile "x = a++ - a++;" might result in your system disk being wiped.
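To make the idea concrete, here's a rough C sketch of the kind of run-time check such a compiler might wrap around every pointer dereference. Nothing here is real compiler output; the punish_undefined_behavior() helper and CHECKED_DEREF macro are invented for illustration, and the punishments are reduced to a printf and an abort():

/* Rough sketch of the idea, not anything a real compiler emits: a
   hypothetical run-time hook wrapped around every pointer dereference.
   The "punishments" are left as harmless printfs here. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void punish_undefined_behavior(const char *what)
{
    /* Pick a consequence at random, to drive home that undefined
       behavior needn't even be consistent from one run to the next. */
    static const char *consequences[] = {
        "spewing /dev/random onto the console",
        "kill -9ing a random process",
        "zeroing a random attached disk",
    };
    srand((unsigned)time(NULL));
    printf("undefined behavior (%s): pretend I'm %s\n",
           what, consequences[rand() % (int)(sizeof consequences / sizeof consequences[0])]);
    abort();
}

/* What the compiler might wrap around every dereference of p. */
#define CHECKED_DEREF(p) \
    ((p) == NULL ? (punish_undefined_behavior("null dereference"), *(p)) : *(p))

int main(void)
{
    int *p = NULL;
    int x = CHECKED_DEREF(p);   /* triggers the punishment */
    return x;
}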
The goal: to impress upon programmers that you cannot rely on undefined behavior. At all. Not on what it does, not even that it does the same thing all the time. The only guarantee you have is that you won't like what'll happen. So avoid it like the plague, or pay the price.
Labels:
software
Saturday, January 3, 2009
Zune lock-up bug
Owners of Microsoft's Zune MP3 player saw their devices lock up hard at the end of 2008. It turns out there's a leap-year bug in Microsoft's code. The clock in the Zune records time as the number of days and seconds since 1/1/1980. To convert that into a normal calendar time, the Zune starts with this code to convert the number of days to years and days:
year = ORIGINYEAR;
while (days > 365)
{
    if (IsLeapYear(year))
    {
        if (days > 366)
        {
            days -= 366;
            year += 1;
        }
    }
    else
    {
        days -= 365;
        year += 1;
    }
}
It's basically looping through, incrementing the year and decrementing days by the number of days in that year, until it has less than a full year's worth of days left. The problem comes on the last day of a leap year. In that case, days will be 366 and IsLeapYear() will return true. The loop won't terminate because days is still greater than 365, but the leap-year path inside the loop won't decrement days because days isn't greater than 366. End result: an infinite loop on 12/31 of any leap year. This bug should've been caught during standard testing. Leap years are a well-known edge case when dealing with dates, and the first and last days of the year and the transition from one year to the next are standard problem points where any errors tend to show up.
Microsoft's proposed solution: wait until sufficiently far into 1/1 of the next year, then force a hard reset of your Zune. Yeah, that'll get your Zune working but it doesn't fix the bug. Bets that Microsoft's fix for this code causes a different kind of failure on 1/1/2013?
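For what it's worth, here's a sketch of one way the loop could be written to avoid the hang, using the same variables and IsLeapYear() helper as the snippet above. This is just an illustration, not Microsoft's actual fix:

/* Sketch of one possible fix, not Microsoft's actual patch: only subtract
   a year while more than a full year's worth of days (leap day included)
   remains, so the last day of a leap year falls out of the loop cleanly. */
year = ORIGINYEAR;
while (days > (IsLeapYear(year) ? 366 : 365))
{
    days -= IsLeapYear(year) ? 366 : 365;
    year += 1;
}

On 12/31 of a leap year, days comes in as 366, the loop condition is false, and the conversion ends with the correct day number instead of spinning forever.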
Friday, January 2, 2009
JournalSpace mistakes mirroring for backups, dies
JournalSpace is dead.
Short form: JournalSpace depended on drive mirroring for back-ups. Something proceeded to overwrite the drives with zeros, and the mirroring politely overwrote the mirrors with zeros too. Their entire database is gone, all blogs, everything.
Repeat after me: mirroring is not a backup. RAID and drive mirroring are for reliability and fault-tolerance. They'll protect you against hardware failure. They won't protect you against software doing something stupid or malicious. If the software says "Write this data to this location on the disk.", the mirroring software and RAID drivers will write it, to the main drive and to every mirror; they will not refuse. If you're depending on your mirrors to contain something other than exactly what the main drive contains, well, you'll end up where JournalSpace is. You need point-in-time backups to external media, something that won't duplicate what's on the main drive unless and until you do the duplication yourself. That's the only way to ensure that, if your software writes corrupted data to your main disks, the backups don't have the same corruption written to them, as long as you catch the problem before you overwrite the backups with new ones. This is also, BTW, why you have more than one set of backup media: so if you do run your backups before catching a problem, you've got older backups to fall back on.
This should also be a cautionary tale for anybody wanting to host their data, application or whatever "in the cloud". If you do, and it's at all important, make sure you a) can and do make your own local backups of everything and b) have a fall-back plan in the event "the cloud" suddenly becomes unavailable. Unless, of course, you want to end up like the people who had their journals on JournalSpace: everything gone, no way to recover, because of somebody else's screw-up.
Labels:
backup,
cloud computing,
software
Tuesday, December 16, 2008
My network layout
For those interested, this is how I lay out my network.
The center of the network is the gateway/router box. It's got three network interfaces on it. eth0 connects to the wired LAN, network 192.168.171.0/24. Off the gateway is an 8-port switch that handles the computer room, with a run out to the living room and a 5-port switch for the game console and my laptop, for when I'm out on the couch or I've run a cable outside to sit on the walkway. It's all 100-megabit Ethernet; I'm planning on upgrading to gigabit at some point. Outgoing connections/flows from this network are relatively unrestricted. The only block is on DNS, which is permitted only to the gateway box.
eth1 on the gateway connects to a 5-port switch where the wireless access points are attached, on network 192.168.217.0/24. This network is considered untrusted: forwarding from it to other networks isn't permitted at the router, so machines on it can only talk to each other or to the router. In addition the router blocks incoming traffic on that interface except for DHCP and IPSec. This limits what a rogue machine on that network can do. The access points don't have WEP/WPA enabled, but they do do MAC filtering. When I upgrade to a faster AP I may enable WPA using authentication just to annoy the kiddies. The primary use for this network is to carry the IPSec VPN traffic on the 192.168.33.0/24 network. That VPN network is considered trusted, and has the same outbound restrictions as the wired LAN segment. Forwarding between the wired LAN and VPN segments is unrestricted and un-NATed.
eth2 on the gateway is connected to the cable modem and gets its address via DHCP from Cox. Traffic from the wired LAN and VPN going out this interface is NATed. Incoming new connections are generally blocked, with specific holes opened for connections to services on the gateway machine (ssh, Apache on a high port, the DHCP client). The outside world is considered untrusted. Outgoing traffic has a few restrictions to prevent un-NATed addresses from escaping.
Most security is based on limited physical access to the hard-wired network. The wireless network that can be reached from outside the apartment is treated the same as any network I don't control. Laptops on it should be firewalled as if operating on a hostile network, and use IPSec to make a VPN connection to the rest of my network. This avoids my having to rely on hardware I don't completely control for security.
Tuesday, December 9, 2008
DNSChanger malware
The Washington Post's security blog is reporting on the DNSChanger malware. This stuff isn't new. It does two things: changes your computer's DNS resolver settings so it uses DNS servers belonging to the bad guys instead of the servers your ISP provides, and activates a DHCP server so that other computers (say, ones attaching to your wireless network) will take addresses and settings (including DNS server settings) from the malware instead of the legitimate server. The result is that everything you do, and everything those machines do, gets funneled through the bad guys' systems along the way. It's hard for security software to detect this; there are no mismatches and no spoofing to spot as a clue that something's wrong.
On my network, this won't work. The DHCP server part might, within limits. But my wireless access point is on a separate physical network from the hard-wired machines, and the firewall blocks the DHCP protocol between the networks. With the VPN active, that limits the damage the rogue DHCP server can do. And my firewall also blocks the DNS protocol at the edge of my network. While on my network, you simply can't use any DNS server except the one I provide. If you try, your query packets will get rejected by the firewall when they try to leave my network. That means if you do get infected by DNSChanger, your DNS will simply stop working completely on my network until the problem's fixed. And my gateway machine, the one machine on my network that gets to do DNS queries to the outside world, doesn't run Windows, isn't used for Web browsing and isn't very susceptible to infection by malware.
Wednesday, December 3, 2008
BodyParts webapp
I finished the first pass at the BodyParts webapp that lets me view and maintain the Legends&Lore body parts inventory for my guild in EQ2. Security framework enabled, anonymous viewing and a login for editing, Tomcat set up properly, Apache configured to front for it. Now I can update the inventory from the in-game browser, no more writing things down on paper and windowing out to update text files.
Next things on the list:
- Add add/subtract fields and buttons to the editing screen so that instead of having to do the math in my head I can just punch in the number of parts and hit Add or Subtract depending on whether I'm depositing or withdrawing parts.
- Move the inventory from an XML data file into a real database table. I'll want to move the user information into a database table in the process and change authentication to match.
- Reconfigure things to have all webapps in their own sub-tree so they don't sit directly at the root of the content tree. I'll have to see whether I want Apache to re-write URLs (so the user sees /webapp/BodyParts/ while Tomcat sees just /BodyParts/) or whether I want to make the change within Tomcat.
- Change things in Eclipse to not depend on MyEclipse for the Spring framework and supporting libraries. I'm finding that as I get more familiar with things it's easier to edit the XML directly than to depend on MyEclipse's tools.
Labels:
java,
software,
spring framework,
web application
Monday, December 1, 2008
MS claims in Vista lawsuit
Microsoft is currently embroiled in a lawsuit over Windows Vista. The plaintiffs there are claiming that Microsoft falsely advertised machines as Vista Capable when they weren't capable of running the Vista shown in advertisements. One of Microsoft's responses to this is that Vista Basic, which the machines are capable of running, is a real version of Vista and therefore there wasn't any false advertising. This isn't going to fly. The question isn't whether Vista Basic is a real version of Vista, it's whether it's the Vista advertised to everyone. And it isn't. It's missing almost all the elements shown in the advertisements, and nowhere in the ads does it say that what's shown may not be present. In fact all the ads emphasize all those elements shown as the things that make Vista Vista. That's going to kill Microsoft.
This isn't a software trademark or copyright case. This is straight-up consumer law that's been around almost as long as car salesmen have. If you advertise a car on TV and show the supercharged V8, 6-speed manual transmission, sports suspension, full leather interior, full power everything with sunroof version, and that's all you show, and you end the ad with "Starting from $9000.", then courts have held that you need to deliver the car as advertised starting at $9000. If the only model you offer at $9000 is the 4-cylinder, 4-speed automatic, cheap cloth interior, manual everything stripped-down model, the courts will nail you for false advertising. You did the advertisement knowing you couldn't and wouldn't deliver what was shown for the price you quoted, and you don't get to do that. Which is why every car advertisement, when they show the price, always says in readable text underneath the "Starting at" stuff something like "Base price. As shown, $LARGER_NUMBER.". And Microsoft didn't do the equivalent. They showed Vista with the Aero interface, emphasized the Aero interface as a defining characteristic of Vista, and never gave a hint in the ads that Vista might not actually contain the Aero interface. A reasonable person, looking at the ads and the "Vista Capable" sticker, would conclude that the sticker meant the machine was capable of running what Microsoft was advertising as Vista. And it can't. And all those e-mails from Microsoft execs show that Microsoft knew it. Bam. Stick a fork in 'em, they're done. They can wriggle all they want, but when it comes down to it they're going to lose on that point for the same reason car dealers lost and had to start adding that explanatory text.
Labels:
advertising,
law,
microsoft
Friday, November 28, 2008
The Srizbi botnet
After being taken down when McColo was shut down, then resurrecting itself, the Srizbi botnet has been taken down once again. It won't last, though: the botnet will start searching domain names trying to reestablish contact with its command-and-control network, and its operators will probably succeed. We need new tactics:
- Instead of trying to block the botnet when it searches for a new C&C network, intercept it. Put something up at the next few domains it'll try that will respond correctly and take control of the botnet. Once you've got control, use that control to have the botnet download a new module that'll remove the botnet software from the computer, then shut down the computer until the user reinstalls Windows.
- Start sanctioning users who allow their computers to become infected. Right now there's no significant penalty assessed against the people who continue to allow their computers to be used by the botnet month after month after month. Start penalizing them. If they get infected, their Internet access gets suspended until they can provide evidence to their ISP that they've cleaned their machine up. Second offense, they get a 3-month suspension. Third offense, they get permanently disconnected. It's the Information Superhighway, and just like the real road system if you accumulate too many points we take away your driver's license.
Friday, November 21, 2008
Spam volume
Early last week McColo was shut down. They provided hosting to nearly half the spam-related community. Some people were saying it wouldn't make a difference, that the spammers would just move to different hosts and spam would pick up again. Well, according to SpamCop's statistics, spam hasn't even started to return to its previous levels yet. You can see the near-vertical drop on the 11th when McColo was cut off, and peak levels since then have held pretty steady. I think one of the reasons is that other hosts looked at what happened to McColo and said "We don't want that happening to us.". I mean, it was pretty major: all of McColo's upstream providers simply pulled the plug on them, terminated the contracts and turned off the interconnect ports. When a spammer who got caught in that comes to another hosting provider, that provider's got to look at the potential down-side of accepting the spammer: complete and total loss of all their business. And they can't say "Oh, that'll never happen.", because McColo is staring them in the face saying "Yes, it will.".
This is, frankly, what we need more of: providers who serve the spammers facing a credible threat of being cut off from the Net if they don't do something about the problem on their network. For the provider it's a question of money, and the best way to change their behavior is to change the cost-benefit equation so the cost of hosting a spammer is higher than the benefit from them.
Wednesday, November 5, 2008
BodyParts webapp
Slowly but surely I'm getting my head around writing a web app using the Spring framework. It's requiring a fair amount of work, but it's much easier to understand when I'm actually writing code and seeing it work (or, more often, fail miserably). I need to get error handling working, then add support for adding a new species, and then add the security bits to allow some users to edit and others to only view the data. Once I'm done I'll not only have a handle on Spring, I'll have a way to edit my guild's body-parts database while I'm in-game.
Labels:
java,
software,
spring framework,
web application
Wednesday, October 29, 2008
Projects I need to do
- Clean up LJBackup. It's been sitting on the back burner for way too long.
- Take the EQ2 body-part inventory system I've got, that's currently based on a hand-maintained CSV file, and convert it into a Spring MVC web app. For low-volume stuff I can run it under Tomcat on an odd port on my gateway machine. That'll let me update the inventory through the in-game browser while I'm actually moving body parts in and out. It'll also be a useful show-off project if I want to move into Java and/or web-app work.
- Create a utility to back up Blogger blogs, both entries and comments, locally. This should be easier than LJBackup.
- Multiple new computers. I've got the guts for a new gateway box (obsolete AMD dual-core Socket 939), just need a case for it. I can turn the existing machines into two boxes sufficient for the kids. That just leaves main/gaming systems.
- I need to scheme some way of getting a hosted server and the time to configure and maintain it correctly. I want to handle e-mail, Web server, DNS and such myself and drop the XMission hosting.
Labels:
hardware,
network,
silverglass,
software
Friday, October 24, 2008
Computer models and the credit crisis
This is more political than normal, but it touches on tech so I'm putting it here. Basically, in the Congressional hearings, Alan Greenspan blamed computer models in large part for the mortgage and credit crisis. I'm sorry, but that's not so. The fault isn't even with the data fed to the models. The fault lies with the people in the banking industry who looked at the models and said "What do we need to feed these models to get the results we want?". They didn't just feed the models wrong data, they outright manipulated the data they were feeding the models to ensure the models gave them a specific answer. That's always a recipe for disaster. They knew the models were telling them those sub-prime loans were bad bets, so they created "stated income" rules so they could feed higher incomes to the models and make the models say the loans weren't as risky. They created credit-default swap instruments that they could factor into the models to make the loans appear less risky, and then decided not to factor in the risk of those credit-default swaps also failing.
The banking industry decided what answers they wanted, then hunted around until they got models and inputs that produced the desired result. You can't do that with any sort of model. And if you do, don't blame the models and the techs for your failure to listen to what the models were telling you when that didn't agree with what you wanted to hear.
But then, we see that in IT from management all the time. The techs say "If we do X, we're going to see these problems.". Management berates the techs for being obstructive and standing in the way of doing X when the rest of the company's agreed on it. Then all the predicted problems occur, and management berates the techs again for allowing those problems to happen. PHBs in action.
Labels:
software
Monday, September 22, 2008
Browser process models
Everyone's praising IE8 for its new one-process-per-tab model. It's got many advantages over the threaded model used by most browsers, including the fact that a crash of one tab can't take other tabs or the browser down. What most people don't seem to get is that the multiple-process model isn't new with IE8, and in fact the threaded model was adopted only over the strenuous objections of everybody else. You see, threads exist for one reason and one reason only: on VMS and Windows (which was designed in part by the guy responsible for designing VMS), creating a process is expensive. You need threads in those OSes because you can't create a lot of processes quickly. But Unix had cheap process creation from day one. If you needed another thread of execution in Unix, you forked off another process. Threads weren't needed. But everybody in the Windows community kept braying about needing threads and why didn't Unix have them, oblivious to the fact that they already had what threads did in the fork() call. So Unix finally, reluctantly, adopted threads with all their pitfalls. And programmers used them heavily when processes would've been more appropriate. Until the pitfalls finally became too much to live with, when they went from just driving programmers nuts to causing problems for the very users who demanded them in the first place. So now we're back to where Unix was 25 years ago, forking a new process when you need a new thread of execution.
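For anyone who hasn't seen the idiom, here's a minimal C sketch of what "fork a process instead of spawning a thread" looks like. This isn't IE8's or any browser's actual code, just the bare pattern: the parent hands a unit of work (think: one tab) to a child process, and a crash in the child can't take the parent down.

/* Minimal sketch of the classic Unix idiom: fork a child process to
   handle a unit of work instead of creating a thread. Not any browser's
   real code, just the pattern. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == -1) {
        perror("fork");
        return 1;
    }
    if (child == 0) {
        /* Child: do the work for this "tab"; if it crashes, only this
           process dies. */
        printf("child %d handling its own tab\n", (int)getpid());
        _exit(0);
    }
    /* Parent: keep running, reap the child when it exits or crashes. */
    int status;
    waitpid(child, &status, 0);
    printf("parent %d saw child exit with status %d\n",
           (int)getpid(), WEXITSTATUS(status));
    return 0;
}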
Labels:
microsoft,
open source,
security,
software,
unix
Thursday, September 18, 2008
Cooling a data center without AC
http://weblog.infoworld.com/sustainableit/archives/2008/09/intel_air_side.html
Intel has done a test, cooling a high-load data center model without using air conditioning, just ambient outside air. They did see a higher failure rate on the equipment, but not nearly as much higher as was expected. And the portion that used a simple, cheap cooling system without all the climate control of a true data-center AC unit had a lower server failure rate than full-on data-center-class AC yielded. My feeling is that it's not the cold air that's critical to data-center health, the servers will be entirely happy with 80-90F cooling air. It's mostly the dust and to a lesser degree the order-of-magnitude changes in humidity that cause most of the failures. Scrub the dust from the air (even cheap AC units do this, and simple filter systems can do it too) and keep the humidity in the 5-35% range (no need to control it to +/- 2%) and you'll give the servers 95% of what they want to be happy. And you can do that for a fraction the cost of a full climate-control system that controls everything to within 1%.
Labels:
hardware
Tuesday, September 16, 2008
Need to update site
I really need to start updating Silverglass (the Web site) again. It used to have a lot of pages about the nitty-gritty of configuring RedHat Linux. I've long since switched to Debian, and the configuration tools have gotten a lot better since the days when getting e-mail address masquerading working involved manually editing sendmail.mc, but there are still things I'd like to document. Setting up the zone files for a DNS server, for example, and configuring a DNS "blackhole" list (a set of DNS domains you want to make not exist anymore, e.g. DoubleClick's domains, so that Web advertisements and such just vanish). And some things haven't changed: setting up a firewall, for example, and the details of Debian's startup scripts and how to play nice with them when writing your own are still useful information. And of course there's rants I want to write but haven't; the originals would be on my blogs of course, but copies can go on the Web site. Then there's code projects like the LJBackup program I can finish up and drop on the site.
I just need to get time and energy together and actually do it.
Labels:
silverglass