Tuesday, 22 September 2009

Digital Britain: is it time for more radical spectrum policy?

Following the thoughts on the 'green-ness' of telecoms companies, it struck me that the forthcoming auction of digital dividend spectrum in the UK is a great opportunity for the Government and Ofcom to stop tinkering round the edges of both the green agenda and Digital Britain and take some positive action.

Auctioning the spectrum (suitable for LTE) to the highest bidder is a safe move, but it won't bring us any closer to either meeting the pledge to halve our carbon emissions by 2050, or the rather shorter-term objective of giving everyone the option of connecting to the Internet with reasonable bandwidth. Instead, I propose a scheme based on rewarding operators for being part of an industrial and social commons.

The carrot in this scheme is free spectrum for cellular LTE use. The value of this is yet to be established; however, we have many data points to choose from, for example:
  • GSM licenses in the UK cost £142,560 per 2*200KHz slot per annum. This adds up to about £16m per operator;
  • 3G licenses were settled by upfront payments to HM Treasury of about £22b, split 5 ways (this was, however, due to over-excitement on the part of the bidding parties, amongst other things)

I prefer the first data point, or schemes such as that in India, where the license costs a small percentage of revenue, hence derisking investment (to a point). My suspicion is that Ofcom will favour an upfront capital payment for a chunk of spectrum. My prediction is that a 2*20MHz slot will cost the buyer between £300m and £500m.
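To make the trade-off between the two models concrete, here's a back-of-envelope sketch in Python. Every number in it is my own illustrative assumption - the discount rate, licence term, operator revenue and revenue-share percentage - not anything Ofcom has proposed:

```python
def annual_cost_upfront(capital, years=20, discount=0.08):
    """Annualise an upfront spectrum payment over the licence term
    (simple annuity; the rate and term are illustrative assumptions)."""
    factor = discount / (1 - (1 + discount) ** -years)
    return capital * factor

def annual_cost_revenue_share(revenue, share=0.03):
    """India-style licence: a small percentage of operator revenue,
    so the cost scales with the success of the investment."""
    return revenue * share

# Hypothetical operator: £400m upfront for a 2*20MHz slot
# versus 3% of £1.5bn annual revenue
upfront = annual_cost_upfront(400e6)
rev_share = annual_cost_revenue_share(1.5e9)
print(f"Upfront, annualised: £{upfront / 1e6:.0f}m/yr")
print(f"Revenue share:       £{rev_share / 1e6:.0f}m/yr")
```

The point of the revenue-share model is visible even in a toy like this: the annual cost is similar, but it is only payable once the revenue exists, which is what de-risks the investment.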

In any case, in my scheme the spectrum is free, provided the following criteria are met:

  • The network must be demonstrably carbon neutral within 3 years of the license being awarded. This condition should include all operations and cannot be met by carbon trading or offset schemes;
  • All participating operators must agree to provide data on handset movement within cells to an aggregator, which will turn the information into a feed, similar to RDS, that enables efficient traffic routing applications. I see this as a similar 'social technology' to GPS;
  • Operators must commit to providing broadband to the final 10% identified in Digital Britain. Technical considerations will determine how fast this is, but I suggest a minimum of 512kbit/s, enabling eGovernment and other initiatives in the future. The range of LTE and the suitability of 800MHz should make this eminently feasible. I also suggest that a single rural network would be beneficial to achieve this (but that's another story)

These may or may not be the right criteria, but my opinion is that the sentiment is correct. We face an environmental crisis - that is nearly certain - and in the Government's eyes also a 'digital divide'. Conventional regulation and policy making will not jerk industry into action. It will not lead to innovation on green issues, or sudden decisions to act anti-commercially.

Neither do I believe that government should intervene in a command economy sense. What commercial entities need is a nudge in the right direction. Consumers will not do this alone as they are not experts in the technologies or supply chain for the services they love. It is up to the Government and Ofcom to set the UK on the right course and in this case, that does not mean taking hundreds of millions off the shareholders of MNOs.

Monday, 21 September 2009

Industrial commons & automotive social networking

I've been asked to contribute to a debate on 'green telecoms', specifically how the telecoms industry can enable other industries as they seek to reduce carbon emissions. A couple of thoughts I've had on the subject:

First, should the telecoms industry contribute to the industrial commons by making certain data available to all, in the same way that GPS signals are open to anyone with a receiver? Tom Tom's HD Traffic service is one service that makes use of data on handset movement to route traffic. Could this data benefit everyone, reducing jams and hence emissions?
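The aggregation needn't involve any personal data at all. A minimal sketch of the sort of feed I have in mind, counting anonymous cell-to-cell handovers per time bucket (the event format and interval are my own assumptions, purely for illustration):

```python
from collections import Counter

def traffic_feed(handover_events, interval_s=300):
    """Aggregate anonymous cell-to-cell handover events into per-link
    counts per time bucket - the raw material for an RDS-like congestion
    feed. Each event is (timestamp, from_cell, to_cell); no handset
    identity is ever needed."""
    buckets = {}
    for ts, src, dst in handover_events:
        bucket = int(ts // interval_s)
        buckets.setdefault(bucket, Counter())[(src, dst)] += 1
    return buckets

# Four handovers: three in the first 5-minute bucket, one in the second
events = [(10, "A", "B"), (20, "A", "B"), (30, "B", "C"), (400, "A", "B")]
feed = traffic_feed(events)
print(feed[0][("A", "B")])  # flow on the A->B link in the first interval
```

A real feed would convert these counts into speeds per road segment, but the shape of the data - flows on links over time - is exactly what routing applications consume.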

Second, is there a benefit to a short range car-to-car communications system, a kind of automotive social networking that enables ECUs to communicate with each other? Besides the obvious benefits around collision avoidance, could such ad-hoc networking help manage traffic flow and even enable the sharing of services such as GPS routing and even in-car entertainment? Sounds interesting conceptually - the EU has even set aside frequency for it at 5.9GHz.

There's most definitely money in this for OEM manufacturers to put chips in every vehicle whether at build or as a retrofit. I also wonder whether there is a micro-payment model for exchanging access to services between vehicles. The challenge to this would be identity management, however accounts could easily be tied to vehicle registrations to get around this.

I'll feed back how the debate goes...

Tuesday, 4 August 2009

Film distribution: how to combat the demise of DVD

I had an interesting (and informal) chat with the VP Sales of a large film distributor yesterday, whose responsibilities cover monetisation of content rights post-theatre. We covered many subjects, but he seemed most concerned by the ongoing decline in the DVD market, where revenues are falling due to lack of demand, BluRay is failing to prop up the market, and the industry appears to be suffering from a lack of vision similar to that which has led to the gross destruction of value in the music rights business.

This is a classic strategic problem and one which cannot be solved simply. I am not, in case it was unclear, an advocate of the cognitive school of strategy! Instead, I thought it'd be fun to lay out how I'd use a hypothesis-based approach to seek and select options for turning around the post-theatre film monetisation business.

To start with, I see four broad value models in this business: "buy-to-own" sales of DVDs; "pay-per-view" either through video-on-demand or rental; subscription TV services (such as Sky Movies); and finally ad-funded free-to-air TV. Although it has little bearing on the pre-analysis part of the exercise, I suspect that the BTO market is in decline, PPV and FTA less so, and subscription is more-or-less neutral. I haven't done the research, so these are just educated guesses :).

There are a bunch of big trends in the developed markets that will develop to a greater or lesser extent over the next 5 years. These are (in no particular order): ubiquitous, very fast broadband (24Mbit/s+); appearance and penetration of IP-connected domestic TVs; deployment of true-broadband cellular networks; and accelerating multi-dimensional competition for distribution and monetisation of content from trad' and non-trad' players.

Based on these trends, I'm going to suggest two big assumptions that need to be tested on the way towards hypotheses. (A1) Market evolution will tend to converge all four of today's value models. (A2) Browser-based delivery will be the dominant distribution method for movies in 5 years.

Finally, a couple of hypotheses that we'd need to test if we were going to look at developing a strategy for this market. (H1) Exclusive self-aggregation is the most value-creating business model for post-theatre movie distribution. (H2) Maximum yield is achieved by an ad-funding business model for the first 30 days of release, subscription on demand for archive with the option to purchase.

A note on H1 - by "exclusive self-aggregation" I mean a model similar to Hulu, by which a group of major studios create a joint venture for distribution of content online and do not sell their content elsewhere. This requires co-operation - as Aesop would have put it: united we stand, divided we fall... H2 is actually not strictly required, but I find hypothesising on value models focuses the mind on how money will actually be made!

To establish the validity of these hypotheses, you'd need a well-constructed value model. What I'd seek to get from this model is a simple set of assumptions you'd need to believe in order for the hypothesis to be true. A well-constructed model is also useful because it allows the counterfactual to be examined, helps establish a picture of the effects of one or more studios not joining such a partnership, shows whether this is a winner-takes-all market, and so on.
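As a flavour of what "assumptions you'd need to believe" looks like in practice, here's a toy value model for H1. Every input - market size, margin, JV market share, studio counts - is a placeholder to be tested, not data:

```python
def jv_value(annual_market, jv_share, margin, studios_in, total_studios, years=5):
    """Back-of-envelope value of an exclusive self-aggregation JV (H1).
    The JV captures `jv_share` of the post-theatre market, scaled by the
    fraction of major-studio content it actually controls. All inputs
    are assumptions to be tested, not research."""
    content_fraction = studios_in / total_studios
    return annual_market * jv_share * content_fraction * margin * years

# Sensitivity to the JV's market share, with 5 of 6 majors participating
for share in (0.2, 0.4, 0.6):
    v = jv_value(10e9, share, 0.3, 5, 6)
    print(f"JV market share {share:.0%}: 5-year value £{v / 1e9:.1f}bn")
```

The useful output isn't the number; it's the list of things you'd have to believe (the share, the margin, the participation rate) for H1 to beat the status quo - each of which then becomes a research question.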

As always, I hope that was interesting - any comments on the hypotheses, assumptions or anything much appreciated.

Thursday, 30 July 2009

Thought experiment: International parcel post

Yesterday I had a chat with a colleague who's looking at transformative ways of reducing the cost of sending parcels to China. We came up with the following thought experiment, which was quite fun - I thought it was worth sharing.

We started out with the assumption that there is a fixed cost of cargo planes along the route from the UK to China and a very much lower cost of sorting a parcel, which varies with the labour cost in the territory the plane lands in. The main determining factor in the cost per parcel is the utilisation ("degree of fullness") of the cargo space in any given plane; the secondary factor is the number of times it is sorted. A parcel takes a pre-determined route through the system, with a fixed number of interchanges, independent of the utilisation of the planes that carry the parcel at each stage.

It occurred to me that this is analogous to the switched telephone network; our thought experiment was to imagine that the international parcel post worked on a "best endeavours" basis, analogous to the IP communications protocol on which the Internet is based. In this scenario, parcels would arrive at a sorting office and be routed to another destination based on the utilisation of the cargo planes that were at the office, always seeking to maximise utilisation and therefore minimise cost per parcel across the whole system, while moving individual parcels towards their destination in a "random walk" fashion.
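A minimal simulation of the assignment rule at a single sorting office might look like this - the plane and parcel structures are my own illustration, and a real router would also weight each plane's destination by how much closer it moves the parcel:

```python
def assign_at_office(parcels, planes):
    """Best-endeavours assignment at one sorting office: always load the
    fullest plane with spare capacity, so utilisation (and hence cost per
    parcel) is maximised. Here every hop counts as progress, in true
    random-walk spirit; destination-awareness is left out for brevity."""
    for parcel in parcels:
        candidates = [p for p in planes if p["load"] < p["capacity"]]
        if not candidates:
            parcel["next_hop"] = None  # held over for the next wave of planes
            continue
        plane = max(candidates, key=lambda p: p["load"])
        plane["load"] += 1
        parcel["next_hop"] = plane["dest"]
    return parcels

planes = [
    {"dest": "Dubai", "capacity": 3, "load": 2},     # nearly full - top it up
    {"dest": "Istanbul", "capacity": 5, "load": 0},  # empty - overflow goes here
]
parcels = [{"id": i} for i in range(4)]
assign_at_office(parcels, planes)
print([p["next_hop"] for p in parcels])
```

The first parcel tops up the nearly full Dubai plane; the remaining three overflow to Istanbul - which is exactly the behaviour that keeps system-wide utilisation high.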

We came up with a number of challenges. Firstly, routing depends on having accurate information about a parcel's destination that is easily (electronically) communicable to the sorting office. RFID tagging of parcels at the point of despatch is the simplest way of achieving this. The technology is well understood and unit prices are low.

Secondly, routing must be calculated based on knowledge of utilisation in the total system, meaning that a great deal of computing power will be required to run the system and that information on parcel volumes and airplane loading must be shared by all sorting offices. Neither problem seems insurmountable: the first part could easily be solved through cloud, grid or distributed computing, with a commercial model based on a fixed price to route a parcel; the second part is political, a matter of corporate strategy and business model. True to form, if the largest couriers and carriers adopted it, the rest would have little choice but to follow suit.

The third challenge is about customer expectation. In a truly "best endeavours" system, it would no longer be possible to guarantee the delivery date and time of a parcel at the point of sending, unless a highly sophisticated probability- or chaos-theory-based model could be constructed to predict likely delivery time. Alternatively, a QoS model could be forced on the system - my opinion is that this is a bad idea (I have a similar view of corporate NGNs that try to do the same thing, but that's for another day) - however it could be done by attaching a sense of importance and priority to each parcel.

This third challenge made us think about pricing models. In our experiment, a customer could be charged a variable price for delivery of their parcel, perhaps within a given range. This would enable the carrier's pricing to become more real-time, enabling them to respond to demand in a more flexible manner. Similarly, the consumer would have better visibility of the trade-offs inherent in international shipping - price increases dramatically when time is the priority, whereas it falls precipitously when efficiency is maximised.
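A sketch of such a demand-responsive tariff - the band and multipliers are purely illustrative:

```python
def parcel_price(base, utilisation, priority=False, floor=0.5, ceiling=3.0):
    """Demand-responsive price within a published band: cheap when the
    system has spare capacity, dearer as planes fill up, with a premium
    for time-priority parcels. All multipliers are illustrative."""
    multiplier = floor + (ceiling - floor) * utilisation
    if priority:
        multiplier *= 2  # priority parcels opt out of the random walk, at a price
    return round(base * multiplier, 2)

print(parcel_price(20.0, 0.2))        # quiet system: £20.00
print(parcel_price(20.0, 0.9, True))  # busy system, urgent parcel: £110.00
```

Publishing the band (here 0.5x to 3x the base) is what keeps the variable price palatable to consumers: they know the worst case before they post.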

Billing will be an interesting challenge, particularly if it becomes factored into the routing algorithm. Similar systems are used for the exchange of international minutes between telecoms carriers, so the technology is understood. There may even be "parcel arbitrage" opportunities.

The primary upsides of this scheme surround efficiency, both financial and carbon-related. By emphasising utilisation of resources, there is less wastage and hence less cost, benefiting the system as a whole. In addition, resilience to disruption such as demand peaks, weather, strikes and so on would be markedly increased.

So, a really enjoyable exercise - I'm sure you can come up with other interesting implications of it. To close, I must say that the degree of cooperation required is probably beyond the World at this intermediate stage in our societal evolution - it'd be fun to try though...

Wednesday, 29 July 2009

Of ladies' loos and AI...

I inadvertently walked into the ladies' loos at work today - in my defence, it seems the building's designers decided to mix up the sexes' respective bathrooms by floor! Why was this revelatory? Well, I barely needed to push open the first of two doors before I knew something was amiss - by the time the second one was cracked open, I was already turning back. Pride intact, I might add!

"I knew something was amiss" got me thinking - there were no visual clues that this was not the gents (besides the huge sign on the door, but I was texting at the time and had my head down...). All I can assume is that the smell was not quite the same, triggering a subconscious "this is not right" reaction. I thus successfully solved a context problem and prevented myself suffering (psychological) harm!

Now, my success in this simple task might seem trivial, however it immediately occurred to me that a pseudo-AI might find solving a similar context problem rather difficult. This is, if you like, a demonstration of the validity of the Chinese room thought experiment. I could construct a look-up table containing every possible characteristic of a ladies' loo that had ever been encountered, and yet such a machine could inadvertently walk into one tomorrow, given the right change in circumstances. It doesn't fundamentally understand the concept of the ladies' loo and hence is unable to make a judgement on whether to proceed in the context of the information it has available.
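The look-up table idea can be made painfully literal - a toy Chinese-room classifier that matches feature sets it has seen before but has no concept to fall back on when the decor changes (the rooms and features are invented for the example):

```python
# Every ladies' loo the machine has ever encountered, as raw feature sets.
# There is no concept here, only memory.
KNOWN_LADIES = {
    ("pink walls", "no urinals", "floral smell"),
    ("sign: ladies", "no urinals", "floral smell"),
}

def looks_like_ladies(features):
    """Pure look-up: true only if this exact combination has been seen."""
    known = {tuple(sorted(k)) for k in KNOWN_LADIES}
    return tuple(sorted(features)) in known

# A room it has seen before: correctly recognised
print(looks_like_ladies({"sign: ladies", "no urinals", "floral smell"}))
# The same kind of room after a refit: the table fails, and in it walks
print(looks_like_ladies({"grey walls", "no urinals", "new paint smell"}))
```

A human generalises from "no urinals" alone; the table cannot, because it stores instances rather than meaning - which is exactly Searle's point.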

I also have started to wonder about context and AIs in general. Those subtle clues that my brain used to discover that I had walked through the wrong door are just the tip of the iceberg for human beings. Most of us have an instinctive ability to read shifts in posture and tone in one another - witness the immediate recognition of hierarchy in business (and many social) groups. This is a result of our recent history as sub-linguistic pack animals: it makes sense for weaker members of the pack to detect subconsciously when the alpha animals are getting ready to give them a clip round the ear, and to beat a hasty retreat to reduce the risk of injury. If we did succeed in building true AIs, would they also have the same context recognition? Would they form natural hierarchies?

My immediate feeling is that they would not, unless there was an evolutionary rationale for such behaviour. I believe that it is possible to incite such behaviour by building certain basic instincts into an AI - a desire to survive, for one, even if that takes the form of a desire to reproduce (and hence a need to survive). Furthermore, there is the need to induce situations where that survival could be threatened, but in such a way that the threat could be avoided through early recognition of the danger. I quickly reach a spiral of complexity when considering this - as creators we could programme what we like, but a truly self-aware, learning AI could simply reprogramme itself unless we had some means of ingraining instincts like those above - an AI subconscious. An interesting idea.

I digress and verge on the Asimov. True AI is many years away - further out than fusion power, fuel cells, mind altering mobiles and pretty much every subject I think about. Keeping to my self-control mechanism of "what's the commercial value of that", I've also started to wonder what the value of investing in artificial intelligence is. This line of thought is even more nebulous than the above, so I'll leave that alone while it crystallises. Apologies for the ramble - hopefully it stimulated some more organised thought for you :).

Monday, 27 July 2009

FMPD - a screen without a screen

In my last post about future mobile phone design concepts, I talked about the thin film contact lens as screen. This time I'll talk briefly about a more radical possibility - using neurological techniques to place images directly into the brain without light shining on the eye.

Before considering this concept, please take a look at this link, to a technology called Brainport. In brief, Brainport uses electrodes mounted on the tongue to transmit images from a head-mounted camera to the brain, in effect enabling the blind to see. This is a laudable objective and a very clever product - the device appears to work rather well - and this type of technology offers a tantalising possibility for the communication device designer of the future.

Just as the contact lens screen overlays images onto the line of sight, a neural feed like that suggested by Brainport could overlay images without any device in the visual field. In effect, the user would be surrounded by a sphere of virtual screen, but without being encumbered by a device attached to the head.

As ever, there are hurdles to get over before such a technology is ready for commercial exploitation. First of all, it's unlikely that the tongue is a practical means of linking such a device to the nervous system - it's quite useful for other things, like eating and speaking - so we need to find another tap into the brain or reduce the size and intrusiveness of the tongue-mounted electrodes. This is a significant challenge, as any invasive procedure will make a consumer device impractical for the mass market - perhaps there is some way of externally controlling neurons using e-m radiation? I'll do some research - if anyone has any thoughts on this subject please yell!

Next time I'll post about controlling the phone of the future, moving away from the keyboard using non contact or mental control of devices.

Friday, 24 July 2009

Wireless power

This post is a quick intermission in my thoughts about future mobile design. When I talked about contact lens screens, I mentioned the need for wireless power supplies to connect devices to each other. The BBC today reported on a company called Witricity who have developed a wireless power system that connects the mains to devices using low frequency resonance.

I'd imagine that such a system is relatively lossy; however, when combined with ever more efficient batteries (driven in part by increasing demand from the automotive sector) or even portable fuel cells, it could form the basis of a "personal power grid". A thought that occurs while writing this is that the battery pack of such a device - or individual devices if the PPG* doesn't happen - could top up with power over the air, just as they synchronise data over the air.

*: sorry, I can't resist a good acronym!
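On the "relatively lossy" point: the standard two-coil resonance analysis gives a neat closed form for peak link efficiency in terms of the coupling coefficient and the coil Q-factors. A quick sketch - the numbers are illustrative and say nothing about Witricity's actual kit:

```python
import math

def link_efficiency(k, q1, q2):
    """Peak efficiency of a two-coil resonant link, from the standard
    figure of merit U = k * sqrt(Q1 * Q2). High-Q coils can tolerate
    very loose coupling, which is the whole trick behind mid-range
    wireless power."""
    u = k * math.sqrt(q1 * q2)
    return u * u / (1 + math.sqrt(1 + u * u)) ** 2

# Loosely coupled coils across a room vs a desk-distance link,
# both with (optimistic, illustrative) Q-factors of 1000
print(f"k = 1%:  {link_efficiency(0.01, 1000, 1000):.0%}")
print(f"k = 10%: {link_efficiency(0.10, 1000, 1000):.0%}")
```

Even at 1% coupling the link is surprisingly efficient on paper; the practical losses come from driving electronics, detuning and real-world Q-factors well below these figures.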

Tuesday, 21 July 2009

Future mobile phone design - next generation screen technology

Continuing the theme of future mobile design, stimulated by this link from MIT's Fluid Interfaces Group, this post is a quick look at some ideas for possible futures in machine-to-man communications. The screen of the future, if you like.

First off, a confession. I love my iPhone. It's a great piece of technology that just works; well enough to get me excited about handsets for the first time in years. It's also led to a spate of people wandering around, head down in their smart phone, writing emails, tweeting, surfing the web, playing games or another one of the tens of thousands of things you can get an app for. This is a shame, because the iPhone has lots of next generation functionality that's exciting to use, but has lost some of the mobility that made cellphones compelling in the first place.

The Fluid Interfaces Group device gets around this "heads down" problem by projecting an interactive screen onto a surface by means of a small projector (and presumably a camera of some sort to watch the user interacting and feed that back). It's a neat solution, however the thought of thousands of people wandering around projecting images onto each other and every spare surface seems a bit far fetched (sorry - I don't like to be negative about new ideas, and it is really neat!).

Here's a couple of technologies that are making their way through the research pipeline that may offer a new way of viewing content, sometime in the next 5-10 years. The first option is a bit conventional, in that it's a screen, but rather unconventional, in that it takes the form of a contact lens. In summary, the researchers at the University of Washington created a thin-film, biologically safe contact lens, containing all the lights and circuitry required for an LCD screen. They then put it in a rabbit's eye. Quite what the temporarily bionic bunny thought of the new tech is unclear, however a human-suitable equivalent offers fascinating possibilities by building on the capabilities of known technology - how many hundreds of millions of people globally wear contact lenses already?

The image from such a display would probably be similar to a head-up display in a car or aeroplane, overlaying information on top of the line of sight. If contextual tagging of objects were included, via a wearable camera, then people would instantly (and discreetly) be able to access information about anything they're looking at, surf the web, instant message, or whatever.

There are hurdles, of course, primarily due to the lens's need for a power supply suitable for a device that is actually mounted in the eye. Nanotechnology and micro-mechanics researchers are beginning to come up with generators that harvest kinetic energy to provide electricity to small devices, however a fully transparent version seems a little far off at this point (I found no patents or papers on such a thing in a brief scan). Induction is another possibility, as it offers a way of wirelessly powering a device from a power supply contained elsewhere on the human body. Incidentally, induction (which is also the means by which near-field payment technologies work) is the most likely means by which the phone would communicate with the lens. Finally, the body itself is a reasonable conductor of electricity - could we become the copper cables of the future?

So, at the level of a rudimentary glance at least, a contact lens screen is possible - there are significant hurdles, but they all seem resolvable given time and a bit of ingenuity to integrate existing or imminent technology. This post is getting rather long, so I'll break here, with the promise that next time I'll post about non-screen solutions to machine-human communication. Hope the above was interesting - any comments or thoughts greatly appreciated.

Future mobile phone design - image recognition

A colleague recently sent me the following link describing a project by MIT's Fluid Interfaces Group, related to next generation interaction solutions. Besides being an interesting (and occasionally amusing) presentation, it reminded me of some thinking I participated in on a similar front, which I thought was worth sharing. For ease of reading, I'll break the subject into multiple posts, starting with image recognition.

Wouldn't it be great if my computer could see what I saw and tell me everything I want to know about it? I'd look at a product on a shelf and know everything about it. I could look at a billboard advertising a film and instantly know where and when I could see it. I'd never forget a name... This might sound like something out of Minority Report, but in fact the "wearable webcam" concept has been around for some years now and I'm slightly surprised that it's still not made it commercially.

Microsoft have been very active in this space with their Sensecam, which initially was a simple camera on a neckchain that took an image every 5 or 10 seconds (meaning an end to lost keys, pens, phones etc...), but has grown over the years to incorporate body monitoring, GPS and so on. Without detailed discussion of the concept's advantages and disadvantages, active recording of daily or even second-by-second activity has been practical for some years, but hasn't come to market for some reason or other.

Contextual analysis of images is much more interesting and challenging. From my perspective this is a logical extension of semantic search (as practiced by search engines). At the moment, search engines identify and link pictures at a macro level by reading their (user-added) metadata tags. Live images sadly lack meta tags, so recognising them depends on being able to rapidly match images. Again, Microsoft Labs have an interesting research project related to this, called Photosynth. This is likely to be a very processor-intensive task, as pictures inherently contain far more data than text, although location information will doubtless help to reduce processing time by narrowing the initial search to images returned by other users in the general vicinity.
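To give a feel for how matching can be made cheap, here's a toy "average hash": downsample to a tiny greyscale grid, threshold each pixel against the mean, and compare hashes by Hamming distance rather than comparing pixels. The 3x3 images are hand-made for the example; real systems use far more robust features:

```python
def average_hash(pixels):
    """Tiny perceptual hash: 1 where a pixel of a downsampled greyscale
    image is at or above the image mean, else 0. Similar images yield
    similar hashes, so matching reduces to comparing short bit strings."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p >= mean else 0 for p in pixels)

def hamming(h1, h2):
    """Number of differing bits - the cheap distance used for matching."""
    return sum(a != b for a, b in zip(h1, h2))

scene = [10, 200, 30, 220, 15, 210, 25, 215, 12]   # "live" 3x3 image
stored = [12, 198, 33, 225, 14, 205, 28, 210, 10]  # near-duplicate in the DB
other = [200, 10, 220, 30, 210, 15, 215, 25, 212]  # unrelated image

print(hamming(average_hash(scene), average_hash(stored)))  # small -> match
print(hamming(average_hash(scene), average_hash(other)))   # large -> no match
```

This is also where location data pays off: rather than hashing against the whole database, you only compare against hashes captured nearby, collapsing the search space by orders of magnitude.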

As the MIT team suggest, the Cloud is a realistic option for image search, provided that sufficient mobile data bandwidth exists at the user's location. That said, solid state storage is becoming ever more capacious, smaller and cheaper, so local caching of an encyclopedia of common 3D images is likely to be commercially possible in the next few years. From a commercial perspective, the opportunity for monetisation of this technology seems clear - location based advertising delivered at the point of identification, so that if I'm looking at a car, I can see the web link that will tell me all about it. Since banner advertising is a zero-sum game of sorts, I'm afraid the money is likely to come from TV and press advertising budgets...

My conclusion on contextual search is that it is currently possible, but will not be commercially feasible until deep, widespread 4th generation mobile networks are available - 2011/12 or thereabouts in the most developed markets. Similarly, monetisation will depend on linking adverts to images in the same ways proposed for monetisation of online VOD.

The combination of always-on, line-of-sight image capture and rich, rapid identification could be a true killer app for mobile data, however its value will be vastly diluted if it is not combined with next generation screen technology. More on that next time.