Can Technology Help?
Despite Berners-Lee’s diatribe against sites like Facebook, a bigger threat to his vision of a free and open web is government regulation and censorship. Berners-Lee himself wrote at length about this threat in his book Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web (1999). His vision is of a web that is mostly self-policing: a place where users vote for the sites they like with their traffic and engagement, and that democratic process, rather than government laws, sets the rules.
I have always thought the notion that the net would police itself was a bit naïve. I was editor of ComputerUser magazine when a lot of kids’ education sites shut down as the dot-com bubble burst. What happened to all their domains, which kids around the world had bookmarked? Porn vendors bought them, specifically to prey on innocent kids. Because of this practice, I proposed an Internet red-light district under the .xxx top-level domain. Parents and school computer-lab techies could use filtering software to block any site with a .xxx domain, while governments regulated illicit content outside of it, much as we rate movie and game content in the U.S.
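To give a sense of how simple such TLD-based blocking could be, here is a minimal sketch. The function name and the blocklist are hypothetical, not taken from any real filtering product:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of top-level domains a parent or lab admin wants filtered.
BLOCKED_TLDS = {"xxx"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's hostname ends in a blocked top-level domain."""
    hostname = urlparse(url).hostname or ""
    tld = hostname.rsplit(".", 1)[-1].lower()
    return tld in BLOCKED_TLDS

# Example: a school proxy could simply refuse to fetch any blocked URL.
print(is_blocked("http://example.xxx/home"))  # True
print(is_blocked("http://example.org/home"))  # False
```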
After a decade of letter-writing and other advocacy with ICANN (the body that governs Internet domain names), I am happy to say the .xxx domain is now a reality. There is still plenty of work to do on the rating system and on putting teeth into laws that criminalize targeting illicit content at children, but we are moving in the right direction. Perhaps some such system could be used for gray-hat advertisers?
The problem with gray-hat ad tracking is that consumers often have to get bitten by a malware bug before they can tell whether to trust an adware tracking cookie. Perhaps a system like the one Facebook uses would work: it lets users indicate whether an ad is uninteresting, misleading, offensive, repetitive, or something else. Right now, Facebook likely sends this data to its advertisers in addition to adding it to some targeting database. But what if it surfaced the data, in aggregate, to its users?
Data like this could be used to create a simple rating system based on the percentage of users who like an ad or flag it for one of the four reasons Facebook asks about. If web ads carried ratings, users could see how an ad is rated before accepting its cookie. Such a rating system would be a boon not only to consumers but to advertisers and publishers as well.
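To make the idea concrete, here is a minimal sketch of how such a rating might be computed from aggregate feedback counts. The data shape and the scoring formula are my own assumptions for illustration; no ad network publishes an interface like this today:

```python
from dataclasses import dataclass

@dataclass
class AdFeedback:
    """Aggregate user feedback on a single ad (hypothetical data shape)."""
    impressions: int
    uninteresting: int
    misleading: int
    offensive: int
    repetitive: int

def ad_rating(fb: AdFeedback) -> float:
    """Rate an ad from 0 to 100 based on the share of users who flagged it.

    This simple formula treats every negative flag equally; a real system
    might weight 'misleading' or 'offensive' more heavily than 'repetitive'.
    """
    if fb.impressions == 0:
        return 0.0
    flags = fb.uninteresting + fb.misleading + fb.offensive + fb.repetitive
    negative_share = min(flags / fb.impressions, 1.0)
    return round((1.0 - negative_share) * 100, 1)

# Example: 50,000 impressions with 4,000 total negative flags rates 92.0.
print(ad_rating(AdFeedback(50_000, 2_000, 1_000, 500, 500)))
```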
When the owners of the site from which I was attacked (a major newspaper) learned that one of the ads in its network had launched malware that forced thousands of users to take their computers in for expensive repairs (not to mention untold losses of sensitive data), they had to issue a formal apology. They were lucky not to face a class-action lawsuit.
Sites that serve ads from large ad networks have little advance knowledge of the ads their partners will serve. If the newspaper site in question had known about the malware, it could have avoided serious risk to itself and its users simply by declining to run the ad. This is another benefit of ad ratings: sites could set filters that allow only highly rated ads. That might cost some revenue initially, but it could become a competitive advantage. Users will prefer sites that serve only highly rated ads, and those sites can parlay that preference into more traffic, more ad clicks, and more revenue down the road.
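A publisher-side filter built on such ratings could be as simple as the following sketch. The minimum-rating threshold and the inventory list stand in for whatever an ad network would actually supply; they are assumptions for illustration, not a description of any existing ad server:

```python
# Minimal sketch of a publisher filtering its ad network's inventory by rating.
# Each entry pairs an ad ID with its (hypothetical) aggregate rating.

MIN_RATING = 85.0  # the publisher's own quality bar, chosen arbitrarily here

def select_ads(candidate_ads: list[tuple[str, float]]) -> list[str]:
    """Return only the ad IDs whose rating meets the publisher's threshold."""
    return [ad_id for ad_id, rating in candidate_ads if rating >= MIN_RATING]

inventory = [("ad-001", 92.0), ("ad-002", 61.5), ("ad-003", 88.0)]
print(select_ads(inventory))  # ['ad-001', 'ad-003']
```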