Tuesday, December 16, 2008
For those interested, this is how I lay out my network.
The center of the network is the gateway/router box, which has three network interfaces. eth0 connects to the wired LAN, network 192.168.171.0/24. Off the gateway is an 8-port switch that handles the computer room, with a run out to the living room and a 5-port switch there for the game console and my laptop when I'm out on the couch or have run a cable outside to sit on the walkway. It's all 100-megabit Ethernet; I'm planning on upgrading to gigabit at some point. Outgoing connections from this network are relatively unrestricted. The only block is DNS, which is permitted only to the gateway box.
eth1 on the gateway connects to a 5-port switch where the wireless access points are attached, network 192.168.217.0/24. This network is considered untrusted: forwarding from it to other networks isn't permitted at the router, so machines on it can only talk to each other or the router. In addition, the router blocks incoming traffic on that interface except for DHCP and IPSec, which limits what a rogue machine on that network can do. The access points don't have WEP/WPA enabled, but they do use MAC filtering. When I upgrade to a faster AP I may enable WPA with authentication just to annoy the kiddies. The primary use for this network is to carry the IPSec VPN traffic on the 192.168.33.0/24 network. That network is considered trusted, and has the same outbound restrictions as the wired LAN segment. Forwarding between the wired LAN and VPN segments is unrestricted and un-NATed.
eth2 on the gateway is connected to the cable modem and gets its address via DHCP from Cox. Traffic from the wired LAN and VPN going out this interface is NATed. Incoming new connections are generally blocked, with specific holes opened for connections to services on the gateway machine (ssh, Apache on a high port, the DHCP client). The outside world is considered untrusted. Outgoing traffic has a few restrictions to prevent un-NATed addresses from escaping.
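In iptables terms, the core of it looks something like this. A simplified sketch, not my actual ruleset (interface names and networks are as above; chain policies, the established/related rules and the gateway's own service holes are elided):

    # DNS is only spoken by the gateway itself: reject any DNS that
    # inside machines try to send straight out to the world.
    iptables -A FORWARD -o eth2 -p udp --dport 53 -j REJECT
    iptables -A FORWARD -o eth2 -p tcp --dport 53 -j REJECT

    # Only the wired LAN and the VPN get forwarded outside, NATed.
    iptables -A FORWARD -o eth2 -s 192.168.171.0/24 -j ACCEPT
    iptables -A FORWARD -o eth2 -s 192.168.33.0/24 -j ACCEPT
    iptables -A FORWARD -o eth2 -j DROP
    iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE

    # Wired LAN and VPN forward to each other freely, un-NATed (the
    # VPN subnet arrives IPsec-protected on eth1).
    iptables -A FORWARD -i eth0 -d 192.168.33.0/24 -j ACCEPT
    iptables -A FORWARD -i eth1 -s 192.168.33.0/24 -d 192.168.171.0/24 -j ACCEPT

    # Otherwise nothing gets forwarded off the wireless segment, and
    # the router itself accepts only DHCP and IPsec from it.
    iptables -A FORWARD -i eth1 -j DROP
    iptables -A INPUT -i eth1 -p udp --dport 67 -j ACCEPT    # DHCP
    iptables -A INPUT -i eth1 -p udp --dport 500 -j ACCEPT   # IKE
    iptables -A INPUT -i eth1 -p esp -j ACCEPT               # IPsec ESP
    iptables -A INPUT -i eth1 -j DROP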
Most security is based on limited physical access to the hard-wired network. The wireless network that can be reached from outside the apartment is treated the same as any network I don't control. Laptops on it should be firewalled as if operating on a hostile network, and use IPSec to make a VPN connection to the rest of my network. This avoids my having to rely on hardware I don't completely control for security.
Tuesday, December 9, 2008
DNSChanger malware
The Washington Post's security blog is reporting on the DNSChanger malware. This stuff isn't new. It does two things: changes your computer's DNS resolver settings so it uses DNS servers belonging to the bad guys instead of the servers your ISP provides, and activates a DHCP server so that other computers (say, ones attaching to your wireless network) will take addresses and settings (including DNS server settings) from the malware instead of the legitimate server. The result is that everything you do, and everything those machines do, gets funneled through the bad guys' systems along the way. It's hard for security software to detect this: there are no mismatches and no spoofing to spot as a clue that something's wrong.
On my network, this won't work. The DHCP server part might, within limits. But my wireless access point is on a separate physical network from the hard-wired machines, and the firewall blocks the DHCP protocol between the networks. With the VPN active, that limits the damage the rogue DHCP server can do. And my firewall also blocks the DNS protocol at the edge of my network. While you're on my network you simply can't use any DNS server except the one I provide; if you try, your query packets will get rejected by the firewall when they try to leave my network. That means that if you do get infected by DNSChanger, your DNS will simply stop working completely on my network until the problem's fixed. And my gateway machine, the one machine on my network that gets to do DNS queries to the outside world, doesn't run Windows, isn't used for Web browsing and isn't very susceptible to infection by malware.
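You can see the effect from any machine on the LAN. Illustrative commands (I'm assuming the gateway answers DNS at 192.168.171.1, which isn't actually specified above, and using 4.2.2.1 to stand in for any outside resolver):

    # Aimed at an outside resolver: the query never makes it past the
    # edge firewall, and dig eventually gives up.
    dig @4.2.2.1 www.example.com

    # The same query against the gateway's own resolver answers fine.
    dig @192.168.171.1 www.example.com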
Wednesday, December 3, 2008
BodyParts webapp
I finished the first pass at the BodyParts webapp that lets me view and maintain the Legends&Lore body parts inventory for my guild in EQ2. Security framework enabled, anonymous viewing and a login for editing, Tomcat set up properly, Apache configured to front for it. Now I can update the inventory from the in-game browser, no more writing things down on paper and windowing out to update text files.
Next things on the list:
- Add add/subtract fields and buttons to the editing screen so that instead of having to do the math in my head I can just punch in the number of parts and hit Add or Subtract depending on whether I'm depositing or withdrawing parts.
- Move the inventory from an XML data file into a real database table. I'll want to move the user information into a database table in the process and change authentication to match.
- Reconfigure things to have all webapps in their own sub-tree so they don't sit directly at the root of the content tree. I'll have to see whether I want Apache to rewrite URLs (so the user sees /webapp/BodyParts/ while Tomcat sees just /BodyParts/) or whether I want to make the change within Tomcat (there's a sketch of the Apache approach after this list).
- Change things in Eclipse to not depend on MyEclipse for the Spring framework and supporting libraries. I'm finding that as I get more familiar with things it's easier to edit the XML directly than to depend on MyEclipse's tools.
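For the Apache-rewriting approach in that third item, a minimal sketch using mod_proxy (assuming Tomcat is listening on its stock HTTP connector port, 8080; the /webapp/ prefix is just the example from above):

    # Publish Tomcat's /BodyParts/ context as /webapp/BodyParts/ and
    # keep any redirects Tomcat issues pointing at the public URL.
    ProxyPass        /webapp/BodyParts/ http://localhost:8080/BodyParts/
    ProxyPassReverse /webapp/BodyParts/ http://localhost:8080/BodyParts/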
Labels:
java,
software,
spring framework,
web application
Monday, December 1, 2008
MS claims in Vista lawsuit
Microsoft is currently embroiled in a lawsuit over Windows Vista. The plaintiffs there are claiming that Microsoft falsely advertised machines as Vista Capable when they weren't capable of running the Vista shown in advertisements. One of Microsoft's responses to this is that Vista Basic, which the machines are capable of running, is a real version of Vista and therefore there wasn't any false advertising. This isn't going to fly. The question isn't whether Vista Basic is a real version of Vista, it's whether it's the Vista advertised to everyone. And it isn't. It's missing almost all the elements shown in the advertisements, and nowhere in the ads does it say that what's shown may not be present. In fact all the ads emphasize all those elements shown as the things that make Vista Vista. That's going to kill Microsoft.
This isn't a software trademark or copyright case. This is straight-up consumer law that's been around almost as long as car salesmen have. If you advertise a car on TV and show the supercharged V8, 6-speed manual transmission, sports suspension, full leather interior, full power everything with sunroof version, and that's all you show, and you end the ad with "Starting from $9000.", then courts have held that you need to deliver the car as advertised starting at $9000. If the only model you offer at $9000 is the 4-cylinder, 4-speed automatic, cheap cloth interior, manual everything stripped-down model, the courts will nail you for false advertising. You did the advertisement knowing you couldn't and wouldn't deliver what was shown for the price you quoted, and you don't get to do that. Which is why every car advertisement, when it shows the price, always says in readable text underneath the "Starting at" figure something like "Base price. As shown, $LARGER_NUMBER.". And Microsoft didn't do the equivalent. They showed Vista with the Aero interface, emphasized the Aero interface as a defining characteristic of Vista, and never gave a hint in the ads that Vista might not actually contain the Aero interface. A reasonable person, looking at the ads and the "Vista Capable" sticker, would conclude that the sticker meant the machine was capable of running what Microsoft was advertising as Vista. And it can't. And all those e-mails from Microsoft execs show that Microsoft knew it. Bam. Stick a fork in 'em, they're done. They can wriggle all they want, but when it comes down to it they're going to lose on that point for the same reason car dealers lost and had to start adding that explanatory text.
Labels:
advertising,
law,
microsoft
Friday, November 28, 2008
The Srizbi botnet
After being taken down when McColo was shut down, then resurrecting itself, the Srizbi botnet has been taken down once again. It won't last, though: the botnet will start searching domain names trying to reestablish contact with its command-and-control network, and its operators will probably succeed. We need new tactics:
- Instead of trying to block the botnet when it searches for a new C&C network, intercept it. Put something up at the next few domains it'll try that will respond correctly and take control of the botnet. Once you've got control, use that control to have the botnet download a new module that'll remove the botnet software from the computer, then shut down the computer until the user reinstalls Windows.
- Start sanctioning users who allow their computers to become infected. Right now there's no significant penalty assessed against the people who continue to allow their computers to be used by the botnet month after month after month. Start penalizing them. If they get infected, their Internet access gets suspended until they can provide evidence to their ISP that they've cleaned their machine up. Second offense, they get a 3-month suspension. Third offense, they get permanently disconnected. It's the Information Superhighway, and just like the real road system if you accumulate too many points we take away your driver's license.
Friday, November 21, 2008
Spam volume
Early last week McColo was shut down. They provided hosting to nearly half the spam-related community. Some people were saying it wouldn't make a difference, that the spammers would just move to different hosts and spam would pick up again. Well, according to SpamCop's statistics, spam hasn't even started to return to its previous levels yet. You can see the near-vertical drop on the 11th when McColo was cut off, and peak levels since then have held pretty steady. I think one of the reasons is that other hosts looked at what happened to McColo and said "We don't want that happening to us.". I mean, it was pretty major: all of McColo's upstream providers simply pulled the plug on them, terminated the contracts and turned off the interconnect ports. When a spammer who got caught in that comes to another hosting provider, that provider's got to look at the potential downside of accepting the spammer: complete and total loss of all their business. And they can't say "Oh, that'll never happen.", because McColo is staring them in the face saying "Yes, it will.".
This is, frankly, what we need more of: providers who serve the spammers facing a credible threat of being cut off from the Net if they don't do something about the problem on their network. For the provider it's a question of money, and the best way to change their behavior is to change the cost-benefit equation so the cost of hosting a spammer is higher than the benefit from them.
Wednesday, November 5, 2008
BodyParts webapp
Slowly but surely I'm getting my head around writing a web app using the Spring framework. It's requiring a fair amount of work, but it's much easier to understand when I'm actually writing code and seeing it work (or, more often, fail miserably). I need to get error handling working, then support for adding a new species, and then the security bits to allow some users to edit and others only to view the data. Once I'm done I'll not only have a handle on Spring, I'll have a way to edit my guild's body-parts database while I'm in-game.
Labels:
java,
software,
spring framework,
web application
Wednesday, October 29, 2008
Projects I need to do
- Clean up LJBackup. It's been sitting on the back burner for way too long.
- Take the EQ2 body-part inventory system I've got, that's currently based on a hand-maintained CSV file, and convert it into a Spring MVC web app. For low-volume stuff I can run it under Tomcat on an odd port on my gateway machine. That'll let me update the inventory through the in-game browser while I'm actually moving body parts in and out. It'll also be a useful show-off project if I want to move into Java and/or web-app work.
- Create a utility to back up Blogger blogs, both entries and comments, locally. This should be easier than LJBackup.
- Multiple new computers. I've got the guts for a new gateway box (obsolete AMD dual-core Socket 939), just need a case for it. I can turn the existing machines into two boxes sufficient for the kids. That just leaves main/gaming systems.
- I need to scheme some way of getting a hosted server and the time to configure and maintain it correctly. I want to handle e-mail, Web server, DNS and such myself and drop the XMission hosting.
Labels:
hardware,
network,
silverglass,
software
Friday, October 24, 2008
Computer models and the credit crisis
This is more political than normal, but it touches on tech so I'm putting it here. Basically, in the Congressional hearings, Alan Greenspan blamed computer models in large part for the mortgage and credit crisis. I'm sorry, but that's not so. The fault isn't even with the data fed to the models. The fault lies with the people in the banking industry who looked at the models and said "What do we need to feed these models to get the results we want?". They didn't just feed the models wrong data, they outright manipulated the data they were feeding the models to ensure the models gave them a specific answer. That's always a recipe for disaster. They knew the models were telling them those sub-prime loans were bad bets, so they created "stated income" rules so they could feed higher incomes to the models and make the models say the loans weren't as risky. They created credit-default swap instruments that they could factor into the models to make the loans appear less risky, and then decided not to factor in the risk of those credit-default swaps also failing.
The banking industry decided what answers they wanted, then hunted around until they got models and inputs that produced the desired result. You can't do that with any sort of model. And if you do, don't blame the models and the techs for your failure to listen to what the models were telling you when that didn't agree with what you wanted to hear.
But then, we see that in IT from management all the time. The techs say "If we do X, we're going to see these problems.". Management berates the techs for being obstructive and standing in the way of doing X when the rest of the company's agreed on it. Then all the predicted problems occur, and management berates the techs again for allowing those problems to happen. PHBs in action.
Labels:
software
Monday, September 22, 2008
Browser process models
Everyone's praising IE8 for its new one-process-per-tab model. It has many advantages over the threaded model used by most browsers, including the fact that a crash of one tab can't take other tabs or the browser down. What most people don't seem to get is that the multiple-process model isn't new with IE8; in fact, the threaded model was adopted by Unix only over the strenuous objections of nearly everybody there. You see, threads exist for one reason and one reason only: on VMS and Windows (which was designed in part by the guy responsible for designing VMS), creating a process is expensive. You need threads on those OSes because you can't create a lot of processes quickly. But Unix had cheap process creation from day one. If you needed another thread of execution in Unix, you forked off another process. Threads weren't needed. But everybody in the Windows community kept braying about needing threads and why didn't Unix have them, oblivious to the fact that they already had what threads provided in the fork() call. So Unix finally, reluctantly, adopted threads with all their pitfalls. And programmers used them heavily when processes would've been more appropriate, until the pitfalls finally became too much to live with, when they went from just driving programmers nuts to causing problems for the very users who demanded them in the first place. So now we're back to where Unix was 25 years ago, forking a new process when you need a new thread of execution.
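The "cheap" part is easy to demonstrate on any Linux box. This spawns a thousand short-lived processes and typically finishes in a second or two (each /bin/true is a full fork-plus-exec, which is more work than a bare fork()):

    # Time the creation of 1000 processes.
    time ( for i in $(seq 1 1000); do /bin/true; done )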
Labels:
microsoft,
open source,
security,
software,
unix
Thursday, September 18, 2008
Cooling a data center without AC
http://weblog.infoworld.com/sustainableit/archives/2008/09/intel_air_side.html
Intel has done a test, cooling a high-load data center model without using air conditioning, just ambient outside air. They did see a higher failure rate on the equipment, but not nearly as much higher as was expected. And the portion that used a simple, cheap cooling system without all the climate control of a true data-center AC unit had a lower server failure rate than full-on data-center-class AC yielded. My feeling is that it's not the cold air that's critical to data-center health; the servers will be entirely happy with 80-90F cooling air. It's mostly the dust, and to a lesser degree the order-of-magnitude swings in humidity, that cause most of the failures. Scrub the dust from the air (even cheap AC units do this, and simple filter systems can do it too) and keep the humidity in the 5-35% range (no need to control it to +/- 2%) and you'll give the servers 95% of what they want to be happy. And you can do that for a fraction of the cost of a full climate-control system that controls everything to within 1%.
Labels:
hardware
Tuesday, September 16, 2008
Need to update site
I really need to start updating Silverglass (the Web site) again. It used to have a lot of pages about the nitty-gritty of configuring RedHat Linux. I've long since switched to Debian, and the configuration tools have gotten a lot better since the days when getting e-mail address masquerading working involved manually editing sendmail.mc, but there are still things I'd like to document. Setting up the zone files for a DNS server, for example, and configuring a DNS "blackhole" list (a set of DNS domains you want to make not exist anymore, e.g. DoubleClick's domains, so that Web advertisements and such just vanish). Some things haven't changed, setting up a firewall for example, and the details of Debian's startup scripts and how to play nice with them when writing your own are still useful information. And of course there are rants I want to write but haven't; the originals would go on my blogs, of course, but copies can go on the Web site. Then there are code projects like the LJBackup program I can finish up and drop on the site.
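For reference, the blackhole trick is just a locally-authoritative zone per unwanted domain that answers 127.0.0.1 for everything. A minimal BIND sketch (DoubleClick as the example; file names are illustrative):

    // In named.conf: claim the domain locally.
    zone "doubleclick.net" {
        type master;
        file "/etc/bind/db.blackhole";
    };

    ; /etc/bind/db.blackhole: everything resolves to 127.0.0.1.
    $TTL 86400
    @   IN  SOA localhost. root.localhost. ( 1 604800 86400 2419200 86400 )
    @   IN  NS  localhost.
    @   IN  A   127.0.0.1
    *   IN  A   127.0.0.1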
I just need to get time and energy together and actually do it.
Labels:
silverglass
Monday, September 15, 2008
Hubble Space Telescope discovers unidentified object
The HST has spotted an unidentified object. (Here's a more serious article.) It's not similar to any type of object previously seen, and its spectrum doesn't match that of any known object or type of object. There wasn't anything in that part of space before it blipped in, and there's nothing obvious there now that it's faded. So what is it? Nobody's got any good ideas yet.
"The most exciting phrase in science, the one most likely to herald a new discovery, is rarely "Eureka!". More often it's "That's funny. It's not supposed to do that, is it?"."
"The most exciting phrase in science, the one most likely to herald a new discovery, is rarely "Eureka!". More often it's "That's funny. It's not supposed to do that, is it?"."
Labels:
astronomy,
hubble space telescope,
science
Wednesday, September 10, 2008
Large Hadron Collider
They've fired up the LHC today for the first tests. A lot of people have been making noises about the risk of it creating a black hole that'll swallow Earth. I'm sorry, the world won't be ending today.
- Yes, the LHC will be colliding particles with higher energies than anything humans have been able to manage to date. But the Universe isn't human, and has been colliding particles with energies orders of magnitude higher than the LHC is capable of for billions of years. Several such collisions happen here on Earth each year as high-energy cosmic rays impact the atmosphere. Given the length of time and the number of events per year, if those collisions created black holes that lasted any length of time, we'd have seen evidence of it happening before now.
- Any black hole the LHC might create will have only the mass of the particles involved in the collision. That's only going to be a couple of protons' worth. Such low-mass black holes emit a large amount of Hawking radiation relative to their mass, and that emitted radiation comes from their mass. Low-mass black holes simply evaporate very quickly (within fractions of a second) after forming. So even if a black hole does get created, it'll disappear again probably before we even know it existed. (There's a back-of-the-envelope number after this list.)
- Even if the black hole sticks around, it won't pull in enough to be a problem. Remember, black holes don't have any greater gravitational pull than anything else of the same mass; their only special property is how steep their gravity well is. The danger radius of such a low-mass black hole is going to be a fraction of the size of a subatomic particle. It's going to have to hit another subatomic particle almost dead-center to suck in any additional mass. At this scale "solid" matter is 99.999% empty space, so it's going to take that black hole a very long time (as in millions of years) to accumulate enough mass to begin pulling in matter on a macroscopic scale.
- Ignoring the above, the black hole will have the same velocity vector and momentum as the particles that created it. That velocity will be well above the Earth's escape velocity, and tangent to its surface. Any black hole will simply continue in a straight line away from Earth, never to return.
- And even ignoring the previous points, this is the test phase. They've only turned on one beam to calibrate and align things. With only one beam, there aren't going to be any particles colliding. So even if the LHC could create black holes, it won't be creating them today.
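How quickly is "very quickly"? The standard Hawking evaporation-time formula makes it concrete (my back-of-the-envelope arithmetic, good to an order of magnitude):

    t = 5120 * pi * G^2 * M^3 / (hbar * c^4)

Plug in M of roughly two proton masses, about 3.3x10^-27 kg, and t comes out around 10^-96 seconds. If anything, "fractions of a second" vastly overstates the thing's lifespan.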
Labels:
large hadron collider,
science
Thursday, August 28, 2008
Abit leaving the motherboard market
According to this article, Abit is exiting the motherboard market by the end of this year. Abit's been my preferred motherboard brand for a long time now, with the right mix of interfaces and options for what I want. Now I've got to find another brand that has models with the right mix and a history of reliability. Annoying.
Labels:
hardware
Saturday, August 23, 2008
aseigo on new MS/Novell deal
http://aseigo.blogspot.com/2008/08/microsoft-and-novell-reaffirm-pact.html
aseigo has some comments on MS's new deal with Novell to buy more Linux support coupons. I have to agree with him. One thing that has bothered me about MS's activities is their nebulous claims about their IP that's supposedly infringed upon by Linux. My first reaction is "I'm from Missouri. Show me.". Exactly what intellectual property does Microsoft claim to own that's being infringed upon, and exactly what in a Linux distribution infringes upon it and how? Lay it out and let's get it resolved. And yet Microsoft won't do that. They play coy, dodging around saying exactly what it is they're accusing Linux of. And my immediate reaction to that is to think that they really don't have any claim that'll stand up to public scrutiny, that if they had to actually lay it all out they'd end up with "We got nuthin'.". And that makes me immediately suspicious of any deal that supports them in this. When someone's running a scam (which is what a false claim made to get others to pay you is: a scam), there are only two kinds of people doing business with them: marks, and accomplices. I probably want to avoid both.
Labels:
fud,
intellectual property,
microsoft,
open source
Thursday, August 21, 2008
DMCA: copyright owners must consider fair use
Copyright owners must consider fair use before filing a DMCA takedown notice. The full decision is here. The basic upshot is that copyright owners are required to consider whether a use of their material would reasonably be considered fair use under copyright law. The DMCA requires that the copyright owner have a good-faith belief that the use is infringing before they can file a takedown notice, and if the use falls under fair use, and a reasonable person would have concluded this beforehand, then the "good-faith belief" test fails. That, BTW, leaves the copyright owner liable for damages and penalties if the target of the notice wants to push it. The downside, of course, is that showing bad faith is a difficult thing to do in court, but it's still nice to have the principle upheld.
The judge says he's not sanguine about the defendant's chances of proving bad faith on the part of the plaintiff. I'm not so sure, at least if the judge is unbiased about it. The infringement in question is a song playing in the background of a baby video posted to YouTube. The Supreme Court has set forth four factors to consider in determining fair use: the nature of the use (commercial vs. non-commercial), the nature of the infringed work, the amount and substantiality of the portion used, and the effect of the infringement on the potential market for the work. It's going to be very hard for a record label to argue that people are going to put up with watching someone's baby video repeatedly just to save the cost of buying the song. They're also going to have a hard time arguing commercial use: YouTube may put ads on the page, but the uploader doesn't get any money from them and has no control over them, and the entity that does get the money (YouTube) isn't the one the plaintiff's making a claim against. Even the nature of the copyrighted work works against the label. The work is a song, and it's merely incidental background noise in a video whose point is to showcase the uploader's baby. The only factor that works anywhere near the plaintiff's favor is the amount of the song audible, and that's countered by the fact that the song's purely incidental background. As I said, it's not likely anyone's going to watch this video mainly for the music, any more than anyone watches a football game mainly to see the advertisements pasted around the stadium. Given all that, if the defendant's got a good lawyer I think they can make a very strong case that the plaintiffs couldn't reasonably have believed the use fell outside fair use. And proceeding when you know or should know otherwise is the very definition of bad faith.
Labels:
copyright,
intellectual property,
law
Tuesday, August 19, 2008
Google session vulnerability
At DefCon there was a presentation on a way to hijack a Google Mail session. Google's implemented a new option to counter it: the option to always use SSL. Now, important point: the attack is not the one that sniffs your session cookie when you're using an unencrypted link. That attack can be prevented merely by using SSL all the time. This attack will work even if you use SSL for everything. It works by inserting code on a non-GMail page that causes a request to the non-SSL GMail pages, and the browser will send the session cookie in that unencrypted request without you being aware of it. When you use Google's fix, setting your account to always use HTTPS, Google does more than just force you to an "https:" URL. It also wipes your existing session cookie and creates a new one with a flag on it telling the browser to only send that cookie in secure (HTTPS) requests. That prevents the cookie from ever being sent in the clear.
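The flag in question is the cookie's Secure attribute. An illustrative header (made-up values, not Google's actual cookie):

    Set-Cookie: SID=abc123; Domain=mail.google.com; Path=/; Secure

A browser will refuse to attach a cookie marked Secure to any plain-HTTP request, so the trick of provoking an unencrypted GMail request now yields a request with no session cookie in it instead of a hijackable one.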
Wednesday, August 13, 2008