Thursday, 30 July 2009

Thought experiment: International parcel post

Yesterday I had a chat with a colleague who's looking at transformative ways of reducing the cost of sending parcels to China. We came up with the following thought experiment, which was quite fun - I thought it was worth sharing.

We started out with the assumption that there is a fixed cost of flying cargo planes along the route from the UK to China and a very much lower cost of sorting a parcel, which varies with the labour cost in the territory the plane lands in. The main determinant of cost per parcel is the utilisation ("degree of fullness") of the cargo space in any given plane, the secondary factor being the number of times the parcel is sorted. A parcel takes a pre-determined route through the system, with a fixed number of interchanges, independent of the utilisation of the planes that carry it at each stage.

It occurred to me that this is analogous to the switched telephone network; our thought experiment was to imagine that international parcel post worked on a "best endeavours" basis, analogous to the IP protocol on which the Internet is based. In this scenario, parcels would arrive at a sorting office and be routed onwards based on the utilisation of the cargo planes currently at that office, always seeking to maximise utilisation and therefore minimise cost per parcel across the whole system, while moving individual parcels towards their destination in a "random walk" fashion.
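
The per-hop decision could be sketched very simply. This is a minimal illustration of the idea, assuming a greedy rule of "put the parcel on the fullest plane that still has space"; all the offices, capacities and loads below are invented for illustration.

```python
# Sketch of "best endeavours" routing: at each sorting office a parcel is
# loaded onto whichever departing plane is already the fullest (maximising
# utilisation), rather than following a fixed, pre-determined route.

def pick_plane(departures):
    """departures: list of (next_office, capacity, current_load).
    Choose the plane with the highest utilisation that still has space."""
    with_space = [d for d in departures if d[2] < d[1]]
    if not with_space:
        return None  # parcel waits for the next wave of departures
    return max(with_space, key=lambda d: d[2] / d[1])

departures = [
    ("Frankfurt", 100, 70),   # 70% full
    ("Dubai",     100, 95),   # 95% full -> chosen
    ("Mumbai",    100, 100),  # completely full, excluded
]
print(pick_plane(departures)[0])  # Dubai
```

A real system would of course weigh utilisation against progress towards the destination, otherwise parcels could wander indefinitely.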

We came up with a number of challenges. Firstly, routing depends on having accurate information about a parcel's destination that is easily (electronically) communicable to the sorting office. RFID tagging of parcels at the point of despatch is the simplest way of achieving this. The technology is well understood and unit prices are low.

Secondly, routing must be calculated from knowledge of utilisation across the total system, meaning that a great deal of computing power will be required to run it, and that information on parcel volumes and aeroplane loading must be shared by all sorting offices. Neither problem seems insurmountable. The first part could easily be solved through cloud, grid or distributed computing, with a commercial model based on a fixed price to route a parcel. The second part is political, a matter of corporate strategy and business model - but true to form, if the largest couriers and carriers adopted it, the rest would have little choice but to follow suit.

The third challenge is customer expectation. In a truly "best endeavours" system, it would no longer be possible to guarantee the delivery date and time of a parcel at the point of sending, unless a sophisticated probability-based (or chaos-theory-based) model could be constructed to predict the likely delivery time. Alternatively, a QoS model could be imposed on the system by attaching a sense of importance and priority to each parcel - my opinion is that this is a bad idea (I have a similar view of corporate NGNs that try to do the same thing, but that's for another day).
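
The probability-based prediction needn't be exotic: a Monte Carlo simulation over hop counts and per-hop dwell times would let the carrier quote a percentile rather than a guarantee. The distributions below are invented purely for illustration.

```python
import random

# Simulate many parcels taking a random-walk route: each takes a random
# number of hops, with a random dwell time (sorting + flight) per hop.
# Quote the 95th percentile as the "likely delivery time".

def simulate_delivery(trials=10_000, seed=42):
    rng = random.Random(seed)
    times = []
    for _ in range(trials):
        hops = rng.randint(2, 6)                              # route length
        total = sum(rng.uniform(6, 18) for _ in range(hops))  # hours per hop
        times.append(total)
    times.sort()
    return times[int(0.95 * trials)]  # 95th-percentile delivery time, hours

print(f"95% of parcels arrive within {simulate_delivery():.0f} hours")
```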

That third challenge made us think about pricing models. In our experiment, a customer could be charged a variable price for delivery, perhaps within a given range. This would make the carrier's pricing more real-time, enabling them to respond to demand more flexibly. Similarly, the consumer would have better visibility of the trade-offs inherent in international shipping - price rises dramatically when time is the priority, and falls precipitously when efficiency is maximised.
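
The "variable price within a range" idea could be as simple as scaling a base price between a floor and a ceiling according to current system utilisation. The numbers here are illustrative only.

```python
# Demand-responsive pricing sketch: empty planes mean cheap marginal
# capacity, nearly-full planes push the price towards the ceiling.

def quote(base_price, utilisation, floor=0.6, ceiling=1.8):
    """Return a price scaled by system utilisation (0.0 to 1.0)."""
    multiplier = floor + (ceiling - floor) * utilisation
    return round(base_price * multiplier, 2)

print(quote(20.0, 0.10))  # quiet system:  14.4
print(quote(20.0, 0.95))  # busy system:   34.8
```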

Billing will be an interesting challenge, particularly if it becomes factored into the routing algorithm. Similar systems are used for the exchange of international minutes between telecoms carriers, so the technology is understood. There may even be "parcel arbitrage" opportunities.

The primary upsides of this scheme surround efficiency, both financial and carbon-related. By emphasising utilisation of resources, there is less wastage and hence less cost, benefiting the system as a whole. In addition, resilience to disruption such as demand peaks, weather, strikes and so on would be markedly increased.

So, a really enjoyable exercise - I'm sure you can come up with other interesting implications of it. To close, I must say that the degree of cooperation required is probably beyond the world at this intermediate stage in our societal evolution - it'd be fun to try, though...

Wednesday, 29 July 2009

Of ladies' loos and AI...

I inadvertently walked into the ladies' loos at work today - in my defence, it seems the building's designers decided to alternate the sexes' respective bathrooms by floor! Why was this revelatory? Well, I barely needed to push open the first of two doors before I knew something was amiss - by the time the second one was cracked open, I was already turning back. Pride intact, I might add!

"I knew something was amiss" got me thinking - there were no visual clues that this was not the gents (besides the huge sign on the door, but I was texting at the time and had my head down...). All I can assume is that the smell was not quite the same, triggering a subconscious "this is not right" reaction. I thus successfully solved a context problem and prevented myself suffering (psychological) harm!

Now, my success in this simple task might seem trivial, but it immediately occurred to me that a pseudo-AI might find solving a similar context problem rather difficult. This is, if you like, a demonstration of the validity of the Chinese room thought experiment. I could construct a look-up table containing every characteristic of a ladies' loo that had ever been encountered, and yet the machine could inadvertently walk into one tomorrow, given the right change in circumstances. It doesn't fundamentally understand the concept of the ladies' loo, and hence it is unable to make a judgement on whether to proceed in the context of the information it has available.
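
The look-up table's failure mode is easy to demonstrate in miniature. This is a deliberately toy sketch of the point: a table of previously-seen characteristics handles the familiar case but gives the wrong answer for any novel combination. All the "features" are invented.

```python
# A lookup-table "pseudo-AI": it flags the room only if it observes an
# exact characteristic it has seen before - no understanding involved.

KNOWN_LADIES_LOO_FEATURES = {
    "sign: skirt icon", "no urinals", "floral soap smell",
}

def lookup_table_ai(observed_features):
    """True only when an observed feature exactly matches the table."""
    return bool(observed_features & KNOWN_LADIES_LOO_FEATURES)

# A familiar situation is handled fine...
print(lookup_table_ai({"no urinals", "hand dryer"}))           # True
# ...but a building with unfamiliar signage presents only unseen
# features, and the table confidently gives the wrong answer.
print(lookup_table_ai({"sign: dress icon", "perfume smell"}))  # False
```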

I have also started to wonder about context and AIs in general. Those subtle clues my brain used to discover that I had walked through the wrong door are just the tip of the iceberg for human beings. Most of us have an instinctive ability to read shifts in posture and tone in one another - witness the immediate recognition of hierarchy in business (and many social) groups. This is a result of our recent history as sub-linguistic pack animals: it makes sense for weaker members of the pack to detect subconsciously when the alpha animals are getting ready to give them a clip round the ear, and to beat a hasty retreat to reduce the risk of injury. If we did succeed in building true AIs, would they have the same context recognition? Would they form natural hierarchies?

My immediate feeling is that they would not, unless there was an evolutionary rationale for such behaviour. I believe it would be possible to instil such behaviour by building certain basic instincts into an AI - a desire to survive, for one, even if that takes the form of a desire to reproduce (and hence a need to survive). Furthermore, there is the need to induce situations where that survival could be threatened, but in such a way that the threat could be avoided through early recognition of the danger. I quickly reach a spiral of complexity when considering this - as creators we could programme what we like, but a truly self-aware, learning AI could simply reprogramme itself unless we had some means of ingraining instincts like those above - an AI subconscious. An interesting idea.

I digress and verge on the Asimov. True AI is many years away - further out than fusion power, fuel cells, mind altering mobiles and pretty much every subject I think about. Keeping to my self-control mechanism of "what's the commercial value of that", I've also started to wonder what the value of investing in artificial intelligence is. This line of thought is even more nebulous than the above, so I'll leave that alone while it crystallises. Apologies for the ramble - hopefully it stimulated some more organised thought for you :).

Monday, 27 July 2009

FMPD - a screen without a screen

In my last post about future mobile phone design concepts, I talked about the thin film contact lens as screen. This time I'll talk briefly about a more radical possibility - using neurological techniques to place images directly into the brain without light shining on the eye.

Before considering this concept, please take a look at this link, to a technology called Brainport. In brief, Brainport uses electrodes mounted on the tongue to transmit images from a head-mounted camera to the brain, in effect enabling the blind to see. This is a laudable objective and a very clever product - the device appears to work rather well - and this type of technology offers a tantalising possibility for the communication device designer of the future.

Just like the contact lens screen overlays images onto the line of sight, a neural feed like that suggested by Brainport could overlay images without any device in the visual field. In effect, the user would be surrounded by a sphere of virtual screen, but without being encumbered by a device attached to the head.

As ever, there are hurdles to get over before such a technology is ready for commercial exploitation. First of all, it's unlikely that the tongue is a practical means of linking such a device to the nervous system - it's quite useful for other things, like eating and speaking - so we need to find another tap into the brain, or reduce the size and intrusiveness of the tongue-mounted electrodes. This is a significant challenge, as any invasive procedure will make a consumer device impractical for the mass market - perhaps there is some way of externally controlling neurons using electromagnetic radiation? I'll do some research - if anyone has any thoughts on this subject, please yell!

Next time I'll post about controlling the phone of the future, moving away from the keyboard towards non-contact or mental control of devices.

Friday, 24 July 2009

Wireless power

This post is a quick intermission in my thoughts about future mobile design. When I talked about contact lens screens, I mentioned the need for wireless power supplies to connect devices to each other. The BBC today reported on a company called WiTricity, who have developed a wireless power system that connects the mains to devices using low-frequency resonance.

I'd imagine that such a system is relatively lossy; however, combined with ever more efficient batteries (driven in part by increasing demand from the automotive sector) or even portable fuel cells, it could form the basis of a "personal power grid". A thought that occurs while writing this: the battery pack of such a device - or individual devices, if the PPG* doesn't happen - could synchronise power over the air, just as devices synchronise data.
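
To put "relatively lossy" in rough numbers: the textbook maximum efficiency of a two-coil resonant link depends only on the figure of merit U = k·sqrt(Q1·Q2), where k is the coupling coefficient (which falls quickly with distance) and Q1, Q2 are the coil quality factors. The coil values below are illustrative guesses, not WiTricity's figures.

```python
import math

# Maximum achievable efficiency of a two-coil resonant inductive link,
# from the standard figure-of-merit formula (assumed Q = 100 per coil).

def max_link_efficiency(k, q1, q2):
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1 + math.sqrt(1 + u**2))**2

for k in (0.5, 0.1, 0.01):  # tight coupling -> across-the-room coupling
    print(f"k={k}: eta_max = {max_link_efficiency(k, 100, 100):.0%}")
```

Even with high-Q coils, efficiency drops steeply as coupling weakens with distance, which is why desktop-range charging is far easier than room-range charging.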

*: sorry, I can't resist a good acronym!

Tuesday, 21 July 2009

Future mobile phone design - next generation screen technology

Continuing the theme of future mobile design, stimulated by this link from MIT's Fluid Interface Group, this post is a quick look at some ideas for possible futures in machine-to-man communications. The screen of the future, if you like.

First off, a confession. I love my iPhone. It's a great piece of technology that just works; well enough to get me excited about handsets for the first time in years. It's also led to a spate of people wandering around, head down in their smartphone, writing emails, tweeting, surfing the web, playing games or doing one of the tens of thousands of other things you can get an app for. This is a shame: the iPhone has lots of next-generation functionality that's exciting to use, but it has lost some of the mobility that made cellphones compelling in the first place.

The Fluid Interfaces Group device gets around this "heads down" problem by projecting an interactive screen onto a surface by means of a small projector (and presumably a camera of some sort to watch the user interacting and feed that back). It's a neat solution; however, the thought of thousands of people wandering around projecting images onto each other and every spare surface seems a bit far-fetched (sorry - I don't like to be negative about new ideas, and it is really neat!).

Here are a couple of technologies making their way through the research pipeline that may offer a new way of viewing content sometime in the next 5-10 years. The first option is a bit conventional, in that it's a screen, but rather unconventional, in that it takes the form of a contact lens. In summary, researchers at the University of Washington created a thin-film, biologically safe contact lens containing all the lights and circuitry required for a display. They then put it in a rabbit's eye. Quite what the temporarily bionic bunny thought of the new tech is unclear; however, a human-suitable equivalent offers fascinating possibilities by building on the capabilities of known technology - how many hundreds of millions of people globally already wear contact lenses?

The image from such a display would probably be similar to a head-up display in a car or aeroplane, overlaying information on top of the line of sight. If contextual tagging of objects were included, via a wearable camera, then people would instantly (and discreetly) be able to access information about anything they're looking at, surf the web, instant message, or whatever.

There are hurdles, of course, primarily around supplying power to a device that is actually mounted in the eye. Nanotechnology and micro-mechanics researchers are beginning to come up with generators that harvest kinetic energy to provide electricity to small devices; however, a fully transparent version seems a little far off at this point (I found no patents or papers on such a thing in a brief scan). Induction is another possibility, as it offers a way of wirelessly powering a device from a power supply worn elsewhere on the body. Incidentally, induction (which is also the means by which near-field payment technologies work) is the most likely means by which the phone would communicate with the lens. Finally, the body itself is a reasonable conductor of electricity - could we become the copper cables of the future?

So, at the level of a rudimentary glance at least, a contact lens screen is possible - there are significant hurdles, but they all seem resolvable given time and a bit of ingenuity to integrate existing or imminent technology. This post is getting rather long, so I'll break here, with the promise that next time I'll post about non-screen solutions to machine-human communication. Hope the above was interesting - any comments or thoughts greatly appreciated.

Future mobile phone design - image recognition

A colleague recently sent me the following link describing a project by MIT's Fluid Interfaces Group, related to next-generation interaction solutions. Besides being an interesting (and occasionally amusing) presentation, it reminded me of some thinking I participated in on a similar front, which I thought was worth sharing. For ease of reading, I'll break the subject into multiple posts, starting with image recognition.

Wouldn't it be great if my computer could see what I saw and tell me everything I want to know about it? I'd look at a product on a shelf and know everything about it. I could look at a billboard advertising a film and instantly know where and when I could see it. I'd never forget a name... This might sound like something out of Minority Report, but in fact the "wearable webcam" concept has been around for some years now, and I'm slightly surprised that it's still not made it commercially.

Microsoft have been very active in this space with their SenseCam, which started as a simple camera on a neck chain that took an image every 5 or 10 seconds (meaning an end to lost keys, pens, phones and the like), but has grown over the years to incorporate body monitoring, GPS and so on. Without a detailed discussion of the concept's advantages and disadvantages, active recording of daily or even second-by-second activity has been practical for some years, but hasn't come to market for one reason or another.

Contextual analysis of images is much more interesting and challenging. From my perspective, this is a logical extension of semantic search (as practised by search engines). At the moment, search engines identify and link pictures at a macro level by reading their (user-added) metadata tags. Live images sadly lack meta tags, so recognising them depends on being able to rapidly match images. Again, Microsoft Labs have an interesting research project related to this, called Photosynth. This is likely to be a very processor-intensive task, as pictures inherently contain far more data than text, although location information will doubtless help to reduce processing time by restricting the initial search to images captured by other users in the general vicinity.
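
The location pre-filter is the cheap part, and easy to sketch. This is an illustrative toy only: the database, coordinates and radius are invented, and the expensive image-matching step is left out entirely.

```python
import math

# Shrink the candidate set with GPS before any costly image matching:
# only compare against images captured near the user's location.

IMAGE_DB = [
    {"id": "eiffel_tower", "lat": 48.858, "lon": 2.294},
    {"id": "big_ben",      "lat": 51.501, "lon": -0.125},
    {"id": "tower_bridge", "lat": 51.506, "lon": -0.075},
]

def nearby(db, lat, lon, radius_km=10):
    """Keep only database entries within radius_km of the user."""
    def dist_km(entry):
        # flat-earth approximation - fine at city scale
        dlat = (entry["lat"] - lat) * 111.0
        dlon = (entry["lon"] - lon) * 111.0 * math.cos(math.radians(lat))
        return math.hypot(dlat, dlon)
    return [entry for entry in db if dist_km(entry) <= radius_km]

# A user standing in central London only searches London landmarks:
candidates = nearby(IMAGE_DB, 51.5074, -0.1278)
print([c["id"] for c in candidates])  # ['big_ben', 'tower_bridge']
```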

As the MIT team suggest, the Cloud is a realistic option for image search, provided that sufficient mobile data bandwidth exists at the user's location. That said, solid state storage is becoming ever more capacious, smaller and cheaper, so local caching of an encyclopedia of common 3D images is likely to be commercially possible in the next few years. From a commercial perspective, the opportunity for monetisation of this technology seems clear - location based advertising delivered at the point of identification, so that if I'm looking at a car, I can see the web link that will tell me all about it. Since banner advertising is a zero-sum game of sorts, I'm afraid the money is likely to come from TV and press advertising budgets...

My conclusion on contextual search is that it is currently possible, but will not be commercially feasible until deep, widespread 4th generation mobile networks are available - 2011/12 or thereabouts in the most developed markets. Similarly, monetisation will depend on linking adverts to images in the same ways proposed for monetisation of online VOD.

The combination of always-on, line-of-sight image capture and rich, rapid identification could be a true killer app for mobile data; however, its value will be vastly diluted if it is not combined with next-generation screen technology. More on that next time.