Don’t Pass Go (go directly to PRISM)?

The genie is out of the bottle, Obama is checking your email, and you can’t push toothpaste back into its blablabla. Welcome to our present defeatist state of surveillance.

The NSA is reading your email? Well, so do Mark Zuckerberg and the Google Bros. And, look at your inbox stats: most likely, they are putting more effort into this task than you do. OK. Game over, don’t pass go, we all go directly to PRISM.

Not so fast. This online world of ours is still in its very early stages. I’ve been dabbling around there since 1992 or so. As online years count like dog years, I should be about 170 years old (just like Ray Kurzweil, when his supply of dietary enhancements finally runs out). But what are a couple of centuries, if we put things into perspective? Between the invention of democracy (Athens, 500 BCE) and its fairly widespread adoption in the late 20th century you can count more than two millennia of feudalism, absolutism, and other -isms. Widespread literacy took even longer, took a hit with the invention of TV, and finally resurged with the Internet.

Massive societal changes do not happen overnight. Not unless they are induced by a catastrophe: an asteroid wiping out the dinosaurs (welcome, mammal), a massive war clearing the path for independence day, or 9/11 (goodbye nail clippers on airplanes, hello total surveillance for safety).

Which leads us back to our current, sadly defeatist state of the webs. Let’s make one thing clear: the massive collection of individual data is not a recent bug. It’s a feature of every digital system, where processing power is constantly rising and the cost of storing data is falling by the hour.

The core question is: who owns this data? Who’s allowed to toy around with it? If my baker or corner supermarket knows about my eating habits, it’s quite OK and helpful. If some secret entity concludes that, because I prefer Halal Döner, my travel patterns should be monitored, we’re entering a very troublesome area.

For quite some time, we mostly chose to ignore those ramifications of our digital lives. Or, to put it differently: some proposed a happy hippie hippo lala-land, a united nations of onliners, where the evil forces of meatspace can be safely ignored. Others preferred an Ayn Randian power play to achieve the status of robber baron of virtuality. And, to be sure, military and government didn’t fall asleep at the wheel. Let’s not forget: the early stages of the Internet were funded by the Department of Defense. And the global rollout of the Internet steamrolled all national online plays, like France’s Minitel or the German Btx.

In a recent blog post, Emin Gün Sirer, associate professor at Cornell, did something quite overdue: he named the three main stakeholders of our online world.
His three force vectors are Military/Political, Commerce, and the Public. And he tries to calculate a “back of the envelope” vector sum. His rough guesstimate: “the forces are aligned in the ratio 1:1:3, with an alliance of the public and commercial interests that overpowers the M/P establishment in favor of transparency and online privacy guarantees.”
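Just to make the arithmetic behind that guesstimate tangible, here is a minimal sketch in Python. The 1:1:3 weights come from the quote above; the signs (which way each force pushes) and the one-dimensional simplification are my own illustrative assumptions, not Sirer’s.

```python
# Back-of-the-envelope "vector sum" of the three stakeholder forces.
# Weights follow the quoted 1:1:3 ratio; the direction each force pushes
# (+1 = toward transparency/privacy, -1 = toward surveillance) is an
# assumption for illustration only.
forces = {
    "military_political": {"weight": 1, "direction": -1},
    "commerce":           {"weight": 1, "direction": +1},
    "public":             {"weight": 3, "direction": +1},
}

net = sum(f["weight"] * f["direction"] for f in forces.values())
print(f"net force: {net:+d}")  # +3 -> on paper, the privacy side wins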

So all’s wrapped up and fine? Most likely not. Civil rights and liberties are not a product of absolute, vectorizable powers, but something you have to work hard for, something you have to stand up for (or sit down for, as Rosa Parks and Gandhi did).
And, in this case, it’s a very complicated reconciliatory process.
– Government, law enforcement, and intelligence services are national entities (except the black UN helicopters, of course, which are after tin-foil-hat-wearing free Americans roaming through Wyoming). There are some very good reasons why those agencies should sometimes be allowed to wiretap certain individuals or entities. Just as there are very good reasons why a total surveillance state, a big-data GDR on digital steroids, is definitely a thing to avoid. And, let’s not forget: the western democratic idea of government and checks and balances means, mostly, that the citizens of a given country are the ones from which all power derives (and that they sometimes have to check whether the balance is still OK).
– Commerce is mostly global. But still bound to national or supranational regulations. If those regulations force companies to share their big consumer data with national government agencies (which share their data with some partners in the international intelligence community, because, you know, sharing is caring and the NSA and BND do take care of you), they should, in their very own interest, push back against this pressure. Because losing trust means, sooner or later, losing market share.
– And, finally: we, the people, we are everywhere. We are the ones who have to keep an eye on our governments and our corporations. It took some millennia to gain some liberties. And it’s fairly easy to lose them all. Either to rogue (united) states, or by ceding too much ground to nicely colored companies.

Publishing is Social Media

After having had a talk with one of our German homegrown publishing tycoons, my friend Ibo posted a comment on his Facebook, which led to a lively discussion: the publisher had complained a bit about Google, sounding quite awestruck at the same time. How they delivered all those world-changing innovations, and how the publishers lost their grip.

Then, Ibo asked the guy about his Social Media budget for 2013. The publisher’s answer: north of €300k.
Which would be a lot of money to spend on, let’s say, a toddler’s birthday bash. But maybe not as the Social Media budget for a publishing powerhouse. Or maybe it’s just fine. Or maybe Social Media is overrated anyway. So the online discussion goes back and forth.

But seriously, the scary thing is: as a news and magazine publisher, he should already be heavily invested in Social Media. His core business is enabling social communication. Only his technology stack seems a bit outdated: it scales nicely up, but not really down to the individual level.

See, publishers are not editorial offices with a print shop attached. Mostly, they feel like the other way round: a printing business, with extra value added by employing some pricey editors.
For both perspectives, the outlook is rather grim. If all you can contribute to society is either a pile of printed matter (the latter) or a dedicated staff of n producing a pile of paper on a regular basis (the former), you are already off track.

Publishing is the business of sounding off and shaping a public’s opinion. Sounds an awful lot like what you can accomplish with the online tools of the trade.
And the sad truth is: publishing houses are technologically challenged Social Media providers with an identity problem.

The medium is the package

I like ebooks. Sometimes. For certain occasions or reads. This summer, for instance, I didn’t schlep a ton of paper bricks to the Greek island where we spent our family vacation. Instead, I loaded a lot of fine books onto my iPad. As I neither waterproofed the iPad nor found a working cooling solution for the beach, I still had to bring some paperbacks.

Sometimes, I choose the ebook over print for other reasons. Let’s take Debt: The First 5,000 Years, the unlikely page-turner by anthropologist/anarchist David Graeber. As Amazon Germany told me, I would have to wait 4-6 weeks until it was ready to ship. The ebook was an easy choice. Zap – instant gratification.

But I still might get myself the print edition.
Why would I do that, duplicating content, if content is really king and all that matters?

The easy answer goes like this: I grew up with printed books. Case closed. True, socialization makes a difference. But having been math-socialized with an LED pocket calculator by Texas Instruments didn’t prevent me from preferring Excel or, nowadays, Google Spreadsheets for my calculations. I’m not a pure-bred Luddite. My music is not on vinyl or cassettes; I treat CDs as an MP3 backup.

Book wannabe.

So what’s the thing with books and reading? As we haven’t reached the age of fully billable telepathy yet, any aspiring author will have to write down his thoughts and constructs. That’s a good start. But to reach any readers, he now has to replicate his opus magnum. A cloisterful of monks can accomplish this job quite nicely, a printing press will speed up the whole thing, and the Interwebs brought me Graeber’s Debt at light speed.

But in any case: to replicate, we need a carrier medium. And not all media are created equal. Production cost, durability, and usability may vary. Many 5,000-year-old Sumerian clay tablets are still around and even pretty readable (if you happen to be fluent in cuneiform). They should get an A+ for durability. Maybe a bit later, the neighboring Egyptians switched to papyrus as their medium of choice. Most likely not because they ran out of clay, but because of the superior usability if you want to write down more than some bookkeeping notes or an ancient tweet. As the Egyptians held on to their papyrus monopoly, the others were drawn to parchment. Which, as it turned out, was not as lightweight and snazzy, but way more durable. Then the Chinese invented paper, Gutenberg movable type, and so on and so on.

It took a while. But the modern printed book is a rather fascinating device. A dedicated handheld reader with a high-resolution display, offering random access to its content. It’s easy to grab and hold. It’s rather sturdy (please do not drop your Kindle from a four-story building). It’s nice to look at and comes in manifold distinctive packages, from cheap and colorful throwaway to leather-encased monolith.

Easily customizable spatial access to content.

Compared to this, the ebook is rather bland. Of course you can judge a printed book by its cover. Just compare this to that. Even a book spine says a lot. Pile up some books on your nightstand, and you have instant access to your chosen bedtime stories. Pile them on your work desk, and chances are high that you’re after business, not leisure. We’re talking spatially customizable one-click access to your readings.

Think about it: ebook usability really sucks. I’m not talking about the reading experience, which is constantly getting better and better (notable exception: iBooks with its kitschy when-I-grow-up-I-want-to-become-a-real-book page design). The real un-fun part is the time before you start with chapter one. Turn on, search, see results with tiny images and a standardized typeface. And, come on, who in real life is so anal as to sort his printed library by title, author AND/OR category?

List this: compulsive sorting disorder.

But it’s getting weirder. You have to know where you bought the book, or at least the file format. Buy some music, and any MP3 player will do the job. Buy Neal Stephenson’s highly recommendable Reamde on Amazon, and it will live in your Kindle or Kindle app and only there, mind you. Buy Graeber’s Debt at the publisher’s store, and it will sit in your iBooks or Stanza app, or both, or anywhere else, depending on which app you synced it to, but definitely not in the Kindle or the Kindle app.

Can we, at least, please get this sorted out?

How the iPod killed the music industry as we knew it

On Saturday, November 10, 2001, Steve Jobs killed (or rather: almost saved) the music industry as we knew it: the first generation of the iPod reached the Apple Stores.

iPod 1st Generation
This machine ♥ CDs.

So let’s flash back to those prehistoric times. For about two decades, the music industry had made a killing by distributing billions and billions of digital masters. The CD, as conceived in the 1980s, wrapped the vinyl album into a shiny digital disc and propelled sales figures to an entirely new level.

But now, at the turn of the millennium, the CD monoculture found itself under attack. The PC (formerly known as a strange device for decidedly uncool nerds) and the Internet (formerly known as a service for decidedly strange scientists) had tainted the love affair of the industry with all things shiny and digital.

And a monoculture it was. Let’s have a look at the US sales figures for 2001. The CD delivered roughly 94% of all revenues. As we know from the world of agriculture, monocultures have their advantages. You can waltz with hugely oversized machines through fields the size of Central Park, profiting from economies of scale. The drawbacks are equally well known. “Monocultures can lead to the quicker spread of diseases,” as Wikipedia drily states.

The music industry fought the digital pest of copying their freely distributed masters like any Idaho potato megafarmer would. After ignoring the first signs of disease, they started crop dusting. As with any pesticide, spreading DRM and later on even (involuntarily) infecting PCs with rootkits had some serious side effects. Their legal crop dusting may have killed Napster. But in 2001, after several years of digital infighting, legally buying and downloading music was still virtually impossible. (Look at the charts, based on RIAA revenue figures: downloads do not even appear before 2004.)

The CD monoculture: making a bundle with bundles.

But back to 2001. “iPod’s built-in FireWire® port lets you download an entire CD into iPod in under 10 seconds and 1,000 songs in less than 10 minutes,” boasted Apple in their press release. Yup. Basically, the iPod was a CD player on steroids – sans discs and drive. You filled it by ripping your CDs on a Mac (no Windows yet). Strike “your CDs”: any CD was fine. (Or you gathered some music files by strolling through the darknets of the times. But that’s a different story.)

Now, let’s have a look at the CD. The recorded music industry, like any media industry, extracts value out of content by selling it via a medium. It’s a productification process, but it does not stop there.

Traditionally, the music industry bundled its content either into a single (buy one medium, get two pieces of content) or an album or compilation (buy one slightly more expensive medium, get at least eight pieces of content). Broken down, an album amounts to a price incentive of something like buying 10 for the price of six.

From the business perspective, getting consumers to buy such a bundle makes perfect sense. Especially if you are in an increasing-returns business like software or media: producing the content is a one-time expenditure. Duplication and distribution add only marginal costs. Hence, the more you sell, the higher the returns.
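A back-of-the-envelope sketch of that bundle logic, with entirely made-up numbers (the 15.00 album, the 5.00 two-track single, the 0.50 duplication cost, and the one-time production cost are illustrative assumptions, not industry figures):

```python
# Illustrative bundle economics; all prices and costs below are made up.
single_price    = 5.00        # one medium, two tracks  -> 2.50 per track
album_price     = 15.00       # one medium, ten tracks  -> 1.50 per track,
                              # i.e. ten tracks for the price of six singles' worth
marginal_cost   = 0.50        # duplicating and shipping one more disc
production_cost = 100_000.00  # one-time cost of producing the content itself

def profit(price: float, units: int) -> float:
    """Per-unit margin times units sold, minus the one-time production cost."""
    return units * (price - marginal_cost) - production_cost

# The album carries roughly three times the margin of the single for nearly
# the same duplication cost, and returns keep growing with every unit sold.
for units in (10_000, 100_000, 1_000_000):
    print(f"{units:>9} units  album: {profit(album_price, units):>13,.2f}"
          f"   single: {profit(single_price, units):>13,.2f}")
```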

Music in 2010: downloads = mostly unbundled content.

The value proposition seemed to have worked pretty well. Can you spot the CD single, containing mostly four titles? In 2001 it’s that tiny little orangeish sliver down there on the right, with a share of 0.6% of all sales. Good for the industry. Because the cost of producing and distributing the single equals pretty much the cost of the whole album – which generates much more revenue.

As we saw, the original iPod was still positioned as a CD-aggregating device, somewhat legally filled by buying CDs and putting your music onto the device. Yes, overall sales were declining. But the industry was still selling bundled content to the consumer.

On April 28, 2003, this was going to change: Apple opened the iTunes Music Store. Albums were still available. And the price of the bundle was still lower than buying the pieces of content one by one. But how did the consumer react?

iTunes
Killing the CD.

Have a look at the sales figures for 2010. The CD is shrinking fast, downloads are gaining just fine. But look at the product shares: 20% download singles vs. 12.1% download albums. Essentially, roughly two thirds of the download revenue now comes from single tracks instead of the content bundles the industry used to sell.
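For what it’s worth, the arithmetic behind that “roughly two thirds”, using only the two shares quoted above and treating them as shares of total revenue:

```python
# Singles' share of download revenue, from the 2010 figures quoted above
# (20% of total sales from download singles, 12.1% from download albums).
download_singles = 20.0
download_albums = 12.1

singles_share = download_singles / (download_singles + download_albums)
print(f"{singles_share:.1%}")  # ~62.3%, i.e. roughly two thirds
```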

iPod family
The iPod device family of 2011.

Coming back to the iPod. The device family still holds a market share of way over 75%. As pure software, it lives happily in every iPhone. The interface still honors the good old times of the album and the compilation. But buying music has massively changed. Be it the iTunes store, Amazon’s mp3 downloads, or any other digital music warehouse: single downloads rule.

Next time: what is really going on here? Is this sustainable? And can or should anything be done about it?

O Sony, where art thou?

Remember Sony, inventor of mobile music (the Walkman, if you’re generation iPod), maker of shiny gizmos and all things transistorized? Yesterday, I spent one hour at Sony’s press conference at IFA Berlin, where manufacturers and lovers of home electronics have gathered since the days when television was the next big thing and very much black and white.

What’s the big deal? Sir Howard Stringer smoothly presented the vision of the wholly integrated media and entertainment empire. Hardware, content, and distribution, all under one roof. Now, what’s missing here? Right: software. Entertainment hardware is almost a commodity. The differentiator is software and the user interface. Why? Look at your smartphone, your tablet, your smart TV: big shiny screens, with slightly different form factors.

The new Sony tablet has a nice new form factor. But turn it on, and it’s an Android device. The new smart TVs definitely look nice. Turn them on, and they’re Android devices. Boot a Sony PC: hello, Microsoft Windows. Start the PlayStation: it’s a Sony.

Sure, you could still find a way to combine all those different worlds. Integrating all battling units into one large consumer unit sounds like a smart and ambitious move.
But the press conference mostly proved that running a vertically and horizontally and however-else integrated empire is not a silver bullet. One hour with three talking heads, from smoothly presented CEO vision to well-rehearsed droning about the features of products with gripping names like XYZ-123, is definitely nothing I would expect from a media empire with a gazillion TV, movie, and music superstars in its employ. Text-to-speech in front of a smurfish-blue background is no substitute for an entertaining event. And I won’t even start to compare this hour with the product presentations of a certain Man in Black.

Software Patents

What are software patents good for? NPR’s “This American Life” nicely explained the theory behind the software patent business.

A more hands-on approach comes from Sanjay Jha, CEO of Motorola Mobility. After he hinted that Motorola could use its bazillion mobile patents to tax some of its Android competitors, Google defended its Android franchise by buying the whole of Motorola Mobility.

Open Source and App Stores

In an interesting talk with pkabel, Peter tried to convince me that the webwide perception of “free” will, for the foreseeable future, keep digital stuff, well: free. Partly, I wholeheartedly agree:

  • Free is powerful, and
  • any digital commodity will sooner or later end up there.

So one question could be: where and for whom could we prevent commoditization? A core feature of a digital file is that it allows infinite, lossless copies. So we’re talking not just about a commodity, but about a commodity with an endless supply. For a while, copy restrictions on digital files seemed to some parties a doable, systemic approach to killing this killer feature. Sure. Unfortunately, this was a bit like trying to protect the water business by demanding that all water be kept in its solid form. Yes, ice is great. But you have to deep-freeze the world, otherwise your business is going to melt.

But does everything digital necessarily become a commodity? I guess that’s where we started to disagree. Disclaimer: some people will know that I’m working on a platform which bets on the value of a digital creation – emphasizing the creation, not the digital part.

As we have no proof or traction or even a counter-example yet, I’m always glad to hear of people thinking in comparable directions. A couple of months ago, Robert Douglass, open source developer of Drupal core fame, proposed his idea of a Drupal App Store to the Drupal community. Which led to some rather heated exchanges. Now you can listen to his reasoning in this podcast by Lullabot Consultants.

You cannot be the cake and eat it, too

Beloved Twitter currently finds itself in a very awkward position. Twitter has always acted as some kind of weird but successful crossbreed of for-profit company and something resembling an open Internet protocol. A massive, cloud-based, proprietary communications channel with a set of open APIs any developer could connect to.

While Twitter was struggling with its success, throwing all its resources at keeping the service up and running, a plenitude of developers started to create a multitude of clients. This was great. A/B-testing a user experience is helpful. But Twitter managed something like A/n testing: with A being Twitter’s own dusty old web interface, and n being all those other ways to work and interact with the platform.

But now, after 5 years of growth (and countless A/n tests), the corporate part of Twitter had to step on the brakes. The artful balance of internal and external development had to be shaken up. But why kill a very successful continuous development process, which externalized most of the cost and risk? After raking in funds like Scrooge McDuck into his money bin, Twitter finally needs a business model which delivers the dollar per user that Twitter co-founder Biz Stone once promoted.

In the meantime, the original post declaring war on the mainstream interface developers has disappeared (and been resurrected). But I don’t think the core problem can be solved that easily. The high-wire act of being Mozilla and Microsoft at the same time is becoming tricky. Or, to twist the old adage on cake ownership and nutrition: you cannot be the cake and eat it, too.