Highlights for Surveillance Valley

The people gathered at city hall that night saw Oakland’s DAC as an extension of the tech-fueled gentrification that was pushing poorer longtime residents out of the city.
The Internet was developed as a weapon and remains a weapon today. American military interests continue to dominate all parts of the network, even those that supposedly stand in opposition.
An even more disturbing dimension of the AIR’s pacification work in Thailand was that it was supposed to serve as a model for counterinsurgency operations elsewhere in the world—including against black people living in American inner cities, where race riots were breaking out at the time.
He began to see that in a society mediated by computer and information systems those who controlled the infrastructure wielded ultimate power.
Where Wiener saw danger, Lick saw opportunity. He had no qualms about putting this technology in the service of US corporate and military power.
Indeed, intelligence agencies were among the first users of the tools ARPA’s command and control program produced just a few years later.
Like many upper-class Americans of his day, North worried that the massive influx of immigrants from Europe was destroying the fabric of American society, causing social and political unrest, and threatening the nation’s racial purity.47 This fear of immigration would become intertwined with anticommunist hysteria, leading to repression of workers and labor unions across the country. North saw statisticians like himself as technocratic soldiers: America’s last line of defense against a foreign corrupting influence. And he saw the tabulator machine as their most powerful weapon.
Deemphasizing ARPA’s military purpose had the benefit of boosting morale among computer scientists, who were more eager to work on the technology if they believed it wasn’t going to be used to bomb people.
Fliers posted on both campuses railed against “computerized people-manipulation” and “the blatant prostitution of social science for the aims of the war machine.”
Pool saw computers as more than just apparatuses that could speed up social research. His work was infused with a utopian belief in the power of cybernetic systems to manage societies. He was among a group of Cold War technocrats who envisioned computer technology and networked systems deployed in a way that directly intervened in people’s lives, creating a kind of safety net that spanned the world and helped run societies in a harmonious manner, managing strife and conflict out of existence.
The language of Licklider’s proposal—talk about propaganda and monitoring political movements—was so direct and so obvious that it could not be ignored. It confirmed students’ and activists’ fears about computers and computer networks and gave them a glimpse into how military planners wanted to use these technologies as tools for surveillance and social control.
Today, people still think that surveillance is something foreign to the Internet—something imposed on it from the outside by paranoid government agencies. Rowan’s reporting from forty years ago tells a different story. It shows how military and intelligence agencies used the network technology to spy on Americans in the first version of the Internet. Surveillance was baked in from the very beginning.
Indeed, the army referred to activists and protesters as if they were organized enemy combatants embedded with the indigenous population.
In the 1990s the country was ablaze with sweeping religious proclamations about the Internet. People talked of a great leveling—an unstoppable wildfire that would rip through the world, consuming bureaucracies, corrupt governments, coddled business elites, and stodgy ideologies, clearing the way for a new global society that was more prosperous and freer in every possible way.
Kevin Kelly, a bearded evangelical Christian and Wired editor, agreed with his boss: “No one can escape the transforming fire of machines. Technology, which once progressed at the periphery of culture, now engulfs our minds as well as our lives. As each realm is overtaken by complex techniques, the usual order is inverted, and new rules established. The mighty tumble, the once confident are left desperate for guidance, and the nimble are given a chance to prevail.”
Brand disagreed. In a long article he filed for Rolling Stone, he set out to convince the magazine’s young and trend-setting readership that ARPA was not some big bureaucratic bummer connected to America’s war machine but instead was part of an “astonishingly enlightened research program” that just happened to be run by the Pentagon.
Brand was deeply embedded in California’s counterculture and appeared as a major character in Tom Wolfe’s The Electric Kool-Aid Acid Test. Yet there he was, acting as a pitch man for ARPA, a military agency that had in its short existence already racked up a bloody reputation—from chemical warfare to counterinsurgency and surveillance. It didn’t seem to make any sense.
Brand took a different path. He belonged to the libertarian wing of the counterculture, which tended to look down on traditional political activism and viewed all politics with skepticism and scorn.
Neuromancer coined the term cyberspace. It also launched the cyberpunk movement, which responded to Gibson’s political critique in a cardinally different manner: it cheered the coming of this cyber dystopia.
Leverage is a good word for Kelly’s sudden religious inspiration. His faith in God matched his faith in the power of technological progress, which he saw as a part of God’s divine plan for the world. Over the years, he developed the belief that the growth of the Internet, the gadgetization and computerization of everything around us, the ultimate melding of flesh and computers, and the uploading of human beings into a virtual computer world were all part of a process that would merge people with God and allow us to become gods as well, creating and ruling over our own digital and robotic worlds just like our maker.
At Wired, Kelly injected this theology into every part of the magazine, infusing the text with an unquestioning belief in the ultimate goodness and rightness of markets and decentralized computer technology, no matter how it was used.
It seemed more a networking hub and marketing vehicle for the industry, a booster intended to create a brand around the cult of technology and the people who made and sold it, and then repackage it for the mainstream culture. It was continuing a tradition that Stewart Brand had started, overlaying an increasingly powerful computer industry with images of the counterculture to give it a hip and grassroots revolutionary edge.
Wired’s impact was not just cultural but also political. The magazine’s embrace of a privatized digital world made it a natural ally of the powerful business interests pushing to deregulate and privatize American telecommunications infrastructure.
John Malone, the billionaire cable monopolist at the head of TCI and one of the largest landowners in the United States, made the cut as well. Wired put him on the cover as a punk counterculture rebel for his fight against the Federal Communications Commission, which was putting the brakes on his cable company’s multi-billion-dollar merger with Bell Atlantic, a telephone giant. He is pictured walking down an empty rural highway with a dog by his side, wearing a tattered leather jacket and holding a shotgun. The reference is clear: he was Mel Gibson of Road Warrior, fighting to protect his town from being overrun by a savage band of misfits, which, to extend the metaphor, was the FCC regulators. The reason this billionaire was so cool? He had the guts to say that he’d shoot the head of the FCC if the man didn’t approve his merger fast enough.
That’s where Wired’s real cultural power lay: using cybernetic ideals of the counterculture to sell corporate politics as a revolutionary act.
Brand saw computers as a path toward a utopian world order where the individual wielded the ultimate power. Everything that came before—militaries, governments, big oppressive corporations—would melt away and an egalitarian system would spontaneously emerge.
People treated the search box as an impartial oracle that accepted questions, spat out answers, and moved on. Few realized it recorded everything typed into it,
The book demonstrates that Page and Brin understood early on that Google’s success depended on grabbing and maintaining proprietary control over the behavioral data they captured through their services. This was the company’s biggest asset.
One thing was certain in the wake of the AOL release: search logs provided an unadulterated look into the details of people’s inner lives, with all the strangeness, embarrassing quirks, and personal anguish those details divulged. And Google owned it all.
Taken together, these technical documents revealed that the company was developing a platform that attempted to track and profile everyone who came in touch with a Google product. It was, in essence, an elaborate system of private surveillance.
The language in the patent filings—descriptions of using “psychographic information,” “personality characteristics,” and “education levels” to profile and predict people’s interests—bore eerie resemblance to the early data-driven counterinsurgency initiatives funded by ARPA in the 1960s and 1970s.
There was only one difference: instead of preventing political insurgencies, Google wanted the data to sell people products and services with targeted ads. One was military, the other commercial. But at their core, both systems were dedicated to profiling and prediction. The type of data plugged into them was irrelevant.
The truth is that the Internet came out of a Pentagon project to develop modern communication and information systems that would allow the United States to get the drop on its enemies, both at home and abroad.
All these CIA-backed companies paid Facebook, Google, and Twitter for special access to social media data—adding another lucrative revenue stream to Silicon Valley.
From their inception, Internet companies banked heavily on the utopian promise of a networked world. Even as they pursued contracts with the military and their founders joined the ranks of the richest people on the planet, they wanted the world to see them not just as the same old plutocrats out to maximize shareholder value and their own power but also as progressive agents leading the way into a bright techno-utopia.
Snowden’s views on private surveillance were simplistic, but they seemed to be in line with his politics. He was a libertarian and believed the utopian promise of computer networks. He believed that the Internet was an inherently liberating technology that, if left alone, would evolve into a force of good in the world. The problem wasn’t Silicon Valley; it was government power.
The cypherpunk vision of the future was an inverted version of the military’s cybernetic dream pursued by the Pentagon and Silicon Valley: instead of leveraging global computer systems to make the world transparent and predictable, cypherpunks wanted to use computers and cryptography to make the world opaque and untrackable. It was a counterforce, a cybernetic weapon of individual privacy and freedom against a cybernetic weapon of government surveillance and control.
I was puzzled, but at least I understood why Tor had backing from Silicon Valley: it offered a false sense of privacy, while not posing a threat to the industry’s underlying surveillance business model.
While couched in lofty language about fighting censorship, promoting democracy, and safeguarding “freedom of expression,” these policies were rooted in big power politics: the fight to open markets to American companies and expand America’s dominance in the age of the Internet.51 Internet Freedom was enthusiastically backed by American businesses, especially budding Internet giants like Yahoo!, Amazon, eBay, Google, and later Facebook and Twitter. They saw foreign control of the Internet, first in China but also in Iran and later Vietnam, Russia, and Myanmar, as an illegitimate check on their ability to expand into new global markets, and ultimately as a threat to their businesses.
China saw Internet Freedom as a threat, an illegitimate attempt to undermine the country’s sovereignty through “network warfare,” and began building a sophisticated system of Internet censorship and control, which grew into the infamous Great Firewall of China.
The correspondence left little room for doubt. The Tor Project was not a radical indie organization fighting The Man. For all intents and purposes, it was The Man. Or, at least, The Man’s right hand.
Despite Tor’s public insistence it would never put in any backdoors that gave the US government secret privileged access to Tor’s network, the correspondence shows that in at least one instance in 2007, Tor revealed a security vulnerability to its federal backer before alerting the public, potentially giving the government an opportunity to exploit the weakness to unmask Tor users before it was fixed.
From a higher vantage point, the Tor Project was a wild success. It had matured into a powerful foreign policy tool—a soft-power cyber weapon with multiple uses and benefits. It hid spies and military agents on the Internet, enabling them to carry out their missions without leaving a trace. It was used by the US government as a persuasive regime-change weapon, a digital crowbar that prevented countries from exercising sovereign control over their own Internet infrastructure. Counterintuitively, Tor also emerged as a focal point for antigovernment privacy activists and organizations, a huge cultural success that made Tor that much more effective for its government backers by drawing fans and helping shield the project from scrutiny.
Most people involved in privacy activism do not know about the US government’s ongoing efforts to weaponize the privacy movement, nor do they appreciate Silicon Valley’s motives in this fight. Without that knowledge, it is impossible to make sense of it all.
In 2015, when I first read these statements from the Tor Project, I was shocked. This was nothing less than a veiled admission that Tor was useless at guaranteeing anonymity and that it required attackers to behave “ethically” in order for it to remain secure.
The old cypherpunk dream, the idea that regular people could use grassroots encryption tools to carve out cyber islands free of government control, was proving to be just that, a dream.
Silicon Valley fears a political solution to privacy. Internet Freedom and crypto offer an acceptable alternative. Tools like Signal and Tor provide a false solution to the privacy problem, focusing people’s attention on government surveillance and distracting them from the private spying carried out by the Internet companies they use every day. All the while, crypto tools give people a sense that they’re doing something to protect themselves, a feeling of personal empowerment and control. And all those crypto radicals? Well, they just enhance the illusion, heightening the impression of risk and danger. With Signal or Tor installed, using an iPhone or Android suddenly becomes edgy and radical. So instead of pushing for political and democratic solutions to surveillance, we outsource our privacy politics to crypto apps—software made by the very same powerful entities that these apps are supposed to protect us from.
So instead of pushing for political and democratic solutions to surveillance, we outsource our privacy politics to crypto apps—software made by the very same powerful entities that these apps are supposed to protect us from.
The IBM machines themselves did not kill people, but they made the Nazi death machine run faster and more efficiently, scouring the population and tracking down victims in ways that would never have been possible without them.
But not all control is equal. Not all surveillance is bad. Without them, there can be no democratic oversight of society.
By pretending that the Internet transcends politics and culture, we leave the most malevolent and powerful forces in charge of its built-in potential for surveillance and control. The more we understand and democratize the Internet, the more we can deploy its power in the service of democratic and humanistic values, making it work for the many, not the few.

Tweet coverage of the 2016 Bot Summit at the V&A in London

I was at the 2016 Bot Summit in London a couple of weeks ago. I did my best to capture salient points from every talk in a tweet. Here are all of them in order.

OSM Live Edit Screensaver

I’ve been running a live OpenStreetMap edits view as a screen saver for a couple of years now and it never fails to draw the attention of people in the room (whether they know what OSM is or not). The OSM visualization is pretty cool and really comes to life when it is displayed full screen. It is also a great way to see a part of the world you might not have known existed. I used to browse atlases when I was a kid, so this is me indulging in virtual travel again.

Will pointed me to the fact that I shot a video of it but never wrote up the super basic process behind it, so here goes.

What it looks like:

This must have been the tweet by Thomas that started it all in early 2013.

After I read that I fiddled around a bit with making my own screensaver in Xcode. That seems simple enough, but building stuff on OS X is a bit of a pain if you’re used to iOS and definitely not something you’ll be able to finish in an hour or so. It turns out that there is a far, far easier way.

  1. Install the webviewscreensaver. Thanks to Alastair Tse.
  2. Plug the URL of the live OSM view into the screensaver. This one: http://osmlab.github.io/show-me-the-way/
  3. Enjoy.

Reading more about the social

As of right now I’m frightfully behind on my Latour MOOC. What I have been doing instead is reading old articles in my Instapaper. One such is this interview with Dutch sociologist/philosopher Willem Schinkel in Vrij Nederland. It’s good to read a fresh Dutch thinker who seems to understand things (and who is also in with Latour). Calling Geert Wilders a proto-fascist and the Netherlands a museum are only a couple of the zingers in there.

The disappointing bit came at the end, where he confessed to not having a cell phone out of principle. This is a terrible bit of intellectual laziness, which brings me to this point on Sloterdijk by Adam Greenfield, which rings true:

The task before us is to discover, or invent, a politics, a mobility and a conviviality that are both authentic to the circumstances in which we find ourselves and capable of giving full expression to the emancipatory potential that remains latent and unrealized in our networked technologies.

Week 322

So it turns out I’ve fallen immensely behind with the weeknotes over here, but we did start writing them at the new office now, which should make up for it somewhat. Those currently live at http://kantberlin.tumblr.com/.

What happened that week was a bunch of work and getting a desk from IKEA to work on at the new place:
Upgraded to a small-ish roller desk

The Möbeltaxi driver took us on an interesting shortcut through the old service tunnels of Tempelhof —I am amazed that Moves tracked it as well as it did— which might be fun to do some urban exploring in at some point:
Secret route through the service tunnels of the old airport. Useful when there is traffic which is more or less always.

Back then we were still drinking some horrible leftover coffee brewed in a two step process:
The morning press and filter

And I had a talk coming up at Bits of Freedom that I sketched out on our brilliant new whiteboard:
I gave the new whiteboard a proper exercising for an upcoming talk

I promised the people future shock and I think I delivered that to some extent.

Don’t release anonymized datasets

There is no such thing as an anonymized dataset. Anybody propagating this idea, even tacitly, is doing a disservice to the informed debate on privacy. Here’s a round-up of some recent cases.


Just today Berlin visualization outfit Open Data City published a visualization of the devices that were connected to their access points during the Re:publica conference earlier this month. The visualization is a neat display of the ebb and flow of people in the various rooms during the event.

It is also a good attempt to change the discourse about data protection in Germany. That discourse tends to be locked in a full-stop stance where absolutely ‘nothing is allowed’ without a ton of waivers. Because of that hassle, a lot of things which could be useful are not implemented. A more relaxed approach with case-by-case decisions would be better. In the case of Re:publica there does not seem to be any harm in making this visualization or in releasing the data (I uploaded it to Fusion Tables here).

What I find to be a disservice to the general debate is the release of ‘pseudonymized’ data where the device IDs have been processed with a salt and hash. The identifying characteristics have been removed, but the IDs are still linked across sessions, making it possible to tie identities to devices and figure out exactly who was where and when during the conference.
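To make the problem concrete, here is a minimal sketch (device IDs, salt, and room data invented, not the actual Re:publica dataset) of why salted-and-hashed but stable pseudonyms stay linkable:

```python
import hashlib

SALT = b"conference-2013"  # one fixed salt reused across the whole dataset

def pseudonymize(device_id: str) -> str:
    """Salted hash: removes the raw ID but stays stable across sessions."""
    return hashlib.sha256(SALT + device_id.encode()).hexdigest()[:12]

# Two sessions from the released logs: same device, same pseudonym.
session_1 = [(pseudonymize("aa:bb:cc:11:22:33"), "room A", "10:00")]
session_2 = [(pseudonymize("aa:bb:cc:11:22:33"), "room C", "15:30")]

# One outside observation suffices: if you saw Alice's phone in room A
# at 10:00, you now know her pseudonym and can pull her entire trail.
alice = session_1[0][0]
trail = [s for s in session_1 + session_2 if s[0] == alice]
```

The salt prevents precomputed dictionaries, but it does nothing about the cross-session linkage, which is exactly where the re-identification risk lives.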

To state it again: at a professional conference such as Re:publica there would in all likelihood be no harm done if the entire dataset were de-anonymized. The harm lies in the pretense that processing a dataset in this way and then releasing it with the interlinkage across sessions intact is a good idea.

Which brings me to my next point.


Yesterday Equens, the Dutch company that processes all payment card transactions, announced a plan to sell these transactions to stores. Transactions would be anonymized but still linked to a single card. This would make it trivial for anybody with a comprehensive secondary dataset (let’s say Albert Heijn or Vodafone) to figure out which real person belongs to which anonymized card. That last fact was not reported in any of the media coverage of the announcement, which is also terrible.
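The linkage attack is easy to sketch. With invented names and values, a single purchase matched against a store’s own till records re-identifies the card’s entire history:

```python
# "Anonymized" transactions: the card number is replaced by a token,
# but the token is stable across all of that card's transactions.
transactions = [
    ("card_7f3a", "2013-05-01 12:03", 4.95),
    ("card_7f3a", "2013-05-02 18:45", 23.10),
    ("card_91bc", "2013-05-01 09:12", 1.50),
]

# The store's own records (hypothetical): it knows who stood at the
# register for each timestamped amount, e.g. via a loyalty card.
till = {("2013-05-01 12:03", 4.95): "J. Jansen"}

# Match one purchase and the token is burned.
identified = {}
for token, when, amount in transactions:
    if (when, amount) in till:
        identified[token] = till[(when, amount)]

# Every past and future transaction on that card now carries a name.
history = [t for t in transactions if t[0] in identified]
```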

After a predictable uproar this plan was iced, but they will keep on testing the waters until they can implement something like this.

Today Foursquare released all real-time checkin data but with suitable anonymization. They publish only the location, a datetime and the gender of the person checking in. That is how this should be done.

License plates

Being in the business of opening data, we at Hack de Overheid had a similar incident: a dataset of license plates was released in which the plates had been md5’ed without a salt. This made it trivial to find out whether a given license plate was in the dataset.
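The weakness is simple to demonstrate. With no salt, anyone can hash a candidate plate and test membership, and the plate space is small enough to enumerate outright. A sketch with invented plates:

```python
import hashlib

# The released dataset: an md5 of each plate, no salt (plates invented).
released = {
    hashlib.md5(p.encode()).hexdigest()
    for p in ["12-AB-34", "56-CD-78"]
}

def in_dataset(plate: str) -> bool:
    # Anyone can hash a candidate and check membership directly,
    # or loop over the whole (small) plate format to recover all plates.
    return hashlib.md5(plate.encode()).hexdigest() in released

print(in_dataset("12-AB-34"))  # True
print(in_dataset("99-ZZ-99"))  # False
```

A per-record salt would have blocked this lookup, though as argued above, stable pseudonyms would still leave the records linkable.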

This was quickly fixed. Again this is not a plea against opening data —which is still a good idea in most cases— but a plea for thinking about the things you do.

AOL search data

The arch-example of poorly anonymized search data is of course still the AOL search data leak from back in 2006. That case has been extensively documented, but not extensively learned from.

Memory online is frightfully short, as is the nature of the medium, but that becomes annoying if we want to make progress on something. Maybe it would be better altogether to lose the illusion that progress on anything can be made online.

For the privacy debate it would be good to keep in mind that the increasingly advanced statistical inference available means that almost all anonymization is going to fail. The only way around this is to not store data unless you have to or to accept the consequences when you do.

Who owns the future?

In Conversation: Jaron Lanier and James Bridle On Who Owns the Future? from The School of Life on Vimeo.

I have just watched the above conversation between Jaron Lanier and James Bridle in Conway Hall, organized by the School of Life. The event marked the occasion of Lanier’s new book “Who Owns The Future?” (Guardian review) and the conversation focused on some interesting ideas from it. I will probably not read the book itself; the things said in the video can be taken on their own, and though they are provocative, they do not motivate me to give Lanier any money.

The main issue is that Lanier signals some interesting problems (he’s not alone: Om Malik just posted this about Data Darwinism), but he makes some terrible comparisons and posits solutions that are wholly unconvincing.


Lanier’s big idea is that those with the biggest computers on the network (and the largest collection of brains to program those computers) are in danger of becoming the rentiers of big data. They will be able to out-compute everybody else and figure out what Gibson called the ‘order flow’ in his Blue Ant trilogy: the best set of actions given the circumstances.

That is an interesting if not exactly novel idea. It serves as a jumping off point into some outright crazy ideas about intellectual property. Lanier compares the contraction created by the current austerity measures with what is happening in the music industry. This is a ridiculous comparison. Even if it did hold, then whatever is happening is an overdue correction to a situation that was unsustainably overleveraged.

In the same vein he waves around the scarecrow that ‘the economy will shrink’. A notion that will undoubtedly play well with the same audience that is inclined to buy his book. Rhetoric about shrinking economies is almost always a phantom. Economic shrinkage may very well be in our near future and does not necessarily need to be a bad thing.

Lanier’s point that people are forced into an informal economy is valid but it speaks more to the failure of social systems than anything else. The social democratic contract that may be inconceivable for Americans is working quite well in Europe. It may need updating both for changing demographics and the digital age, but I don’t think many people here would trade it for what Lanier is peddling. Like I mentioned in my data tax post, we don’t have the problem of musicians who can’t pay their medical bills.


The proposed solutions are even more problematic (though if you’re so inclined you might term them ‘thought provoking’).

Lanier seems overly influenced by the music industry and by the concept of private copyright. I would assert that the music industry with its track record is not something worth emulating. The sky is not falling in the music industry. They are facing a long overdue re-evaluation of their social contract because their carrier of value has lost its excludability. There are still lots of people making music and thriving.

Lanier seems to roughly comprehend how a just society should work: ‘For society to be democratic, income needs to be distributed in a way that is roughly a bell curve.’ But at the same time he seems confused about how that should be implemented: ‘Socialism needs to be off the table in the information age.’

The bidirectional reference networks that Lanier proposes that preserve the context and provenance of data sound fantastic. There are however real reasons why we are doing the ‘profoundly dumb thing we are doing’ instead. His network sounds awfully similar to the idea of the semantic web, where everything online will work perfectly if only we would do it The Right Way (which we of course never will).

His solution to ‘Become as aware as possible of how you fit in other people’s computation schemes.’ is a good idea. It is the same algorithmic literacy pointed to in work by Kevin Slavin, Douglas Rushkoff and James Bridle himself.

I’m afraid that Lanier’s rhetoric of a ‘more honest accounting’ will play particularly well in Germany where similar words are already being used to take Google to court. Germany passed a Leistungsschutzrecht (ancillary copyright for publishers) because they figured out that large American companies were making outlandish amounts of money based on the work of large German publishing houses.

The conversation of a fair distribution of wealth in a power-law based networked economy is one we need to have. I doubt though if this particular book is a good starting point for such a conversation. Lanier’s cultural foundations point us towards a solution that is at best unrealistic and tries to extrapolate the problematic private notion of copyright to society as a whole.

The data tax I wrote about yesterday is an approach from a more public point of view. That would focus more on personal data and the revenue generated from such a tax would go into government so it would be subject to democratic controls. Ideas that won’t fly well with Lanier’s Silicon Valley crowd, but maybe that’s all the better.

Taxing data is not crazy

There are some interesting similarities between a recent proposal commissioned by the French government and the book out by Jaron Lanier just now “Who Owns The Future?”

Both analyses signal the dominance of corporate actors in a big data world and both suggest new methods of taxation as a potential solution to the problem. An article over at Forbes explains the commission’s proposal by Nicolas Colin and makes a lot of sense.

The French report has been received with predictable knee-jerk responses across the tech world. It is true that governments have not been very good at regulating the internet. But not regulating the internet is not a solution. We could hope for representation that is competent when it comes to the digital world.

The companies that create the internet should not cry foul. They have a track record of evading taxes rather than contributing their fair share back to society.

I’ll tackle Lanier’s position in another post. I just watched the conversation he had with James Bridle in Conway Hall and noticed some errors in Lanier’s ideas: they require a fully functional semantic web, they seem overly informed by private copyright practice and complementarily they take a weak government for granted.

How you would enforce such a law is another question entirely, but it cannot go further off the mark than how large companies manage to evade taxes right now. It may in fact be a lot fairer to tax data at the point of collection/use.

If you don’t bother to read the article above, I can sum it up in two key points below:

  1. Data is hazardous waste material and as such its production and storage should be discouraged (the CO2 tax was given as an example in the Forbes article). Cory Doctorow compared personal data breaches to nuclear disasters, because the fallout is so tremendously hard to contain and control. Whoever collects large amounts of personal data treats the privacy damage caused by breaches as an externality. The storage of such data should therefore be discouraged with a tax.

  2. Data is capital and should be taxed as all capital is. Storage, mining and arbitrage using data can generate revenue for sophisticated market actors (those that Lanier terms as having ‘the biggest computer on the network’). Data is a value-adding asset that generates wealth and more data for those who already have it. If we don’t want a situation where a small group of people get richer at the expense of everybody else, we should tax it.

So data is both capital and hazardous. We tax many things with either of those properties so we should definitely tax something that has both.

Hosting on Heroku with functioning MX records

It is not at all obvious how to host a website on Heroku while also maintaining e-mail delivery. You would think this is a very common situation and would be well documented, but unfortunately it is not.

We got a DNSimple account because that is how Heroku supports naked domains. DNSimple sets up the ALIAS record for you easily enough, but what it doesn’t do is warn you if you have both MX and CNAME records on the same name. A CNAME takes precedence over every other record at that name, so mail lookups follow it and your e-mail gets routed to proxy.heroku.com. That is undesirable, and something DNSimple should warn against.

What turns out to be the best solution is to set ALIAS records for both your apex domain and your subdomains (as proposed here). This way you no longer need a CNAME record that can interfere with other settings. Heroku’s documentation advises using a CNAME record, so I’m going to ask them whether there are any problems with using an ALIAS for all web routing.
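For illustration, the resulting record set looks roughly like this (domain names assumed; ALIAS is a DNSimple-specific record type, not a standard one):

```
; illustrative record set, not an actual zone file
example.com.        ALIAS   myapp.herokuapp.com.   ; apex, where CNAME is not allowed
www.example.com.    ALIAS   myapp.herokuapp.com.   ; subdomain, avoiding CNAME here too
example.com.        MX 10   mx1.mailprovider.com.  ; mail keeps resolving normally
```

Because no CNAME exists at either name, the MX record is looked up directly and delivery is unaffected.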

The other option would be to purchase another plan for Zerigo, which seems to be Heroku’s preferred solution for this issue right now. Again this is rather poorly documented, and we would have liked to be informed about that before we chose the DNSimple option.

Update: Heroku replied with the following.

Great question. The ALIAS record, created by DNSimple, is basically a bunch of magic that does a combination of what CNAMEs and A Records do, but does it behind the scenes. You can read more about the ALIAS records here: http://blog.dnsimple.com/zone-apex-naked-domain-alias-that-works/

That said, DNSimple would likely be better equipped to answer a question like this. I don’t see any reason why you couldn’t use ALIAS records in place of CNAMEs. There might be a slight difference in performance between the two, but I’m not certain enough about that to say for sure.

After which I asked the same question over at DNSimple on their blog. That comment is awaiting moderation; I’ll post the answer here as soon as it appears.

Watersnake, a simple voting app

My small project during Swhack was to create a Django version of a delegated voting system, partially inspired by Liquid Feedback and the manifold problems that system has. In particular, it is written on such an esoteric stack that it is near impossible to get running without root on a Linux machine, and let’s not even discuss the maintenance. What is even worse is that this makes it nearly impossible for outsiders to join the project and contribute to it significantly.

In this interview about Liquid Democracy you can read quite clearly how the technical mandate drives the direction of the project, something that may not be very desirable if you think of it as a democracy-centric issue rather than a technology-centric one.

So I wanted to see how hard it would be to write something similar in vanilla Django. It’s easy to hate on Django, but you can find tons of people who can work on this in just about every major city, the framework and the documentation are mature, and many parts of the framework can be called excellent.

I thought putting together something that at its core implements a delegated voting engine should be doable in an afternoon, and it was. What took the most time was playing around with the settings of the test runner, which I hadn’t really used before. So the watersnake app in this project does majority voting on single proposals with support for delegation. To see it work you have to run the tests, but building this out into a full-fledged (web) app that can be deployed to Heroku with a single command is technically trivial (and also time consuming).
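The core of such an engine really is small. This is not the actual watersnake code, just a sketch of the idea in plain Python (names invented): follow each voter’s delegation chain to whoever actually cast a vote, then tally with one unit of weight per voter.

```python
def resolve(delegations: dict, voter: str) -> str:
    """Follow a delegation chain to the voter who actually casts the vote.

    Stops on a cycle so a->b->a cannot loop forever.
    """
    seen = set()
    while voter in delegations and voter not in seen:
        seen.add(voter)
        voter = delegations[voter]
    return voter

def tally(voters, delegations, votes):
    """Majority vote on a single proposal with transitive delegation."""
    counts = {"yes": 0, "no": 0}
    for v in voters:
        final = resolve(delegations, v)
        if final in votes:  # abstentions simply carry no weight
            counts[votes[final]] += 1
    return counts

voters = ["ann", "bob", "cem", "dee"]
delegations = {"bob": "ann", "cem": "bob"}   # cem -> bob -> ann
votes = {"ann": "yes", "dee": "no"}
print(tally(voters, delegations, votes))     # {'yes': 3, 'no': 1}
```

Ann carries her own weight plus bob’s and cem’s, which is the whole point of delegated voting; the rest is framework plumbing.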

This wasn’t a stretch to implement right now because I’m also doing some other projects which border on collaborative writing/decision making/filtering. As always, technology is neither the problem nor the solution, but certain technical systems grant different socio-technical affordances than others. I will probably not work on this unless there is a clear demand, but I thought it would be useful to debunk the idea that building such a system needs to be difficult or complex.