Hacker News

I would genuinely be interested in seeing examples of significant harm from web analytics or behaviorally-targeted advertising. Right now the most common argument against FTC regulation of these fields is that no one has been able to bring forward an individual who has actually suffered harm, so your comment that "far too many people wind up suffering significant harm in one form or another" makes me cock an eyebrow.

'Mainstream advertising campaigns' for larger publishers are absolutely reliant on third-party tracking and targeting - for frequency capping, ad-serving verification, and demographic targeting. If these tools go away, the branded ad spend stays on television.

The New York Times is the most obvious example of a publisher (and journalism the most obvious sector) that'd be negatively impacted by Do Not Track. They're making significant revenue from their online business right now, but they also have very significant expenses. It costs money to run a news organization capable of international reporting and investigative journalism. 'Minor websites' are not the issue here - it's the major websites that are concerned.

I'm calling it a night, but I'll wrap up with a couple of quotes from the Online Publishers Association (which includes the NYT and every other major American news organization) comment on the FTC's "Protecting Consumer Privacy in an Era of Rapid Change" preliminary report:

Online publishers should have the right to offer their content and services on any lawful terms that are explicitly communicated to consumers and withhold access from those who do not agree to such terms. To require otherwise would burden publishers’ First Amendment speech with free riders who enjoy the benefits of access to valuable content without providing fair value in exchange.

[D]efault rules that prevent fair value exchanges of digital content for user data could harm consumer welfare by reducing incentives for some publishers to invest in the production of content and/or creating incentives for publishers to charge or charge more for content that they would otherwise make available for free or at a lower cost.



> I would genuinely be interested in seeing examples of significant harm from web analytics or behaviorally-targeted advertising.

I think advertising itself is more of an annoyance than a serious harm in most cases, though I would certainly regard targeting certain profiles with advertising for certain products as abuse - mainly where the target is unlikely or unable to make sound judgements, for example children, adults with learning difficulties, people suffering from a recent emotional trauma, or people who are recognisably not well-informed about legal, medical or financial matters.

However, what really worries me is that it's not only advertising that can be driven by this kind of personal profiling, and the effects in other cases can be far greater than the irritation of seeing yet another toy advert because you just uploaded some baby photos.

For example, here in the UK there was a lot of media attention a couple of days ago, because it looks like car insurers are going to be forced to stop offering different prices to male and female customers based on gender alone. The insurers, of course, have been profiling, and argue that on average young male drivers are more expensive in terms of the accidents they have and the resulting costs. However, while there may be some correlation there, it doesn't imply a causative effect in any individual case, and it doesn't change the fact that many safe male drivers are paying more and many dangerous female drivers are paying less. Since all drivers in my country are required by law to have insurance, this sort of profiling has effectively meant that many good male drivers have been charged thousands of pounds in what amounts to an inescapable tax, just for fitting a naively constructed risk profile.

Is it such a leap to wonder what would happen if health insurance companies were able to start profiling on grounds that were not directly clinically relevant, particularly in countries where private health insurance is the norm?

What about profiling and employer blacklists? Sorry, we can't give you the job: even though you appear on the surface to be an excellent and highly qualified candidate, we've analysed your friendship network, and several of your regular contacts have photos on Facebook that our automated analysis software thinks show them being excessively drunk, which means that statistically there is a relatively high chance of your work performance also being impaired for alcohol-related reasons. Oh, and just to save you some time, don't bother applying for any other jobs where your hard-earned specialist skills and useful experience would be relevant, because we know the four other big-name employers all check the same databases we do.

> I'm calling it a night, but I'll wrap up with a couple of quotes from the Online Publishers Association

As far as I'm aware, no-one is saying that publishers can't offer content on their own terms. The publishers will simply have to be transparent and up-front about what those terms really are, and compete accordingly. Moreover, where monopolies or essential services are involved, consumer protection regulation may be warranted, in the same way that state-sanctioned monopolies such as our railway and postal networks are sometimes subject to pricing constraints imposed other than by market forces.



