Posts Tagged ‘Internet’


Wednesday, January 28th, 2009

It is only after reading Jason Scott’s F*ck the Cloud that I realised that my two previous posts were in fact touching upon the same subject from two different angles. Though I do not necessarily agree with all of Jason’s points, because his definition of “The Cloud” seems a little vague, he has made several good ones.

I have to admit I am still not sure what “the cloud” is — people seem to have many different views, a bit like with Web 2.0. I note that the Wikipedia entry for cloud computing is move-protected due to vandalism, and that a large number of techies prefer surrounding the term with inverted commas.

For the purposes of this post, I will use “the cloud” to refer to the collection of software and hardware services which, using the Internet as the underlying infrastructure, enable data to live outside the home or office. It therefore relies on SaaS, virtualisation and Web 2.0 to make it happen. This definition includes GMail, blog platforms and social networks as well as Amazon EC2. To me, the term is simply a convenient way to refer to the current trend in Web development; even if, given the lack of integration and interoperability, we should really use the plural form…

In my post on Google’s approach to business rationalisation, I was looking at the service provider’s end: I was wondering about the effect of shutting down Web services for a company which is actively promoting cloud computing. Because companies like Google and Amazon are at the steering wheel, people are watching their moves, especially service providers in search of a good business model. Freemium might be the way to go because it allows a service to reach critical mass, but I am sure that other models will emerge.

What I was implying was that providers are not only selling productivity, they are selling trust as well. The issue of trust isn’t new, but when you have control over the software and hardware, it is easier to take responsibility for the data. When users lose direct control of their data, trust becomes vital. After all, there could be thorny legal issues regarding data retention, liability, etc. At the moment, providers take no responsibility (read the fine print!), which makes it theoretically risky to use “the cloud” for anything mission critical or sensitive.

But people are bad risk assessors, and if “the cloud” solves their problem, they will embrace it. As Dare Obasanjo mentioned on his blog, given the massive adoption, trust might already be an issue that is ready to evaporate. To follow his example, it took a few decades for people to realise that seat belts and airbags might be good ideas, and drink driving and speeding not such good ones… The fatality rate did not deter people from using cars: gradually, manufacturers made cars safer while traffic authorities enforced rules and educated people.

In my other post, I mentioned an article published in a French magazine that reconstructed the life of an Internet user from all the information he had left on the Internet. What I found interesting was that people were putting so much data, and therefore so much faith, in “the cloud”. Of course, in the case of social applications such as Facebook or Twitter, the data is generally trivial and can hardly be considered mission critical or sensitive — although a lot of people would not appreciate losing their list of friends, photos, business contacts, etc.

I was pointing out that the coming generation, not carrying the same history of problems as the previous ones, makes anyone 30+ sound grumpy — in fact most of the criticism was coming from experienced professionals. There was a time when people would print everything they typed because their hard drive was not safe enough; nowadays, they say that only their hard drive (for personal users) or their data centre (for businesses) is safe enough, and they see “the cloud” as a big fluffy thing that will disappear. Maybe they would appreciate my old dot matrix printer.

My guess is that users will continue to take advantage of “the cloud”, and they will learn to decide what data is important. Businesses will also learn, and because they are better risk assessors, they will pay a premium for better guarantees and service when needed. Providers will probably start offering better interoperability, and continue to adapt their services to a growing demand.

Trust (or lack thereof) did not affect adoption, but risk awareness eventually changed the behaviour of users and manufacturers. In that regard, what happened with the car industry will happen with the Web.

“The cloud” is no silver bullet; we just need to understand better when it is appropriate to use it. The term will gradually disappear, but only because it is silly.

The Life Of – related articles

Tuesday, January 27th, 2009

A few articles worth mentioning relating to my previous post “The Life Of”.

From the New Scientist:

From ReadWriteWeb:

Interestingly, I just saw this:

I might add a few more later…

Google’s new year clean-up

Wednesday, January 21st, 2009

There have been many reactions to Google’s announcement that it will cut back on services and shelve several of its pet projects. I have to admit that I am not familiar with several of the services being shut down, and there are already lots of well-informed comments on each individual project, so this is more of a general comment.

I understand the need for a company to focus on its core assets and shut down products which bring little revenue, especially during a downturn. In fact, I wonder why it took them so long to deal with Google Video given that YouTube is a lot more popular and offers a similar service. It was a redundant service, so I guess they could have merged it or removed it a lot earlier — they didn’t have to wait for the credit crunch.

I am sure that the same could be said for other services.

But that did not seem to be the case for several other services that were given the knife. A lot of bloggers felt that projects like Jaiku or Google Notebook had a lot of potential and were never given a real chance, which led to speculation about Google’s intentions in the first place — was it just to acquire new talented development teams? If that is the case, it confirms that Google is adopting conventional strategies. Fine.

But I wonder if Google may also be shooting themselves in the foot…

Rationalising a network of services

A few years ago, I became really interested in small-world networks; one example stuck in my mind because it was counter-intuitive. One of the books had a chapter about transport optimisation, explaining how having some lines operate at a loss enabled other lines to be more profitable, therefore generating a global profit.

There were several examples of transport networks eventually collapsing because lines were evaluated individually and independently instead of evaluating the network as a whole. When non-profitable lines were chopped off, or the frequency of service was reduced to reflect attendance, users would start looking at other options. But they would also stop using other lines, which would then become less profitable. As the rationalisation process continued, more lines would be chopped off until only a handful of profitable lines survived, or none at all. Globally, the network would not generate any more revenue, but it would provide a lot less service, and not necessarily more efficiently.

The non-profitable lines were not valued properly, because it was the structural properties of the network that made the whole structure sustainable. The network had to be rationalised as a whole to be made more efficient, not assessed purely line by line: management had failed to understand that these services were interdependent and what it meant to remove them. Commuters did not want to go just from one station to another, they wanted to go from one place to another, and would choose the transportation means that best suited their needs.
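The effect is easy to reproduce in a toy model (the lines, routes and figures below are all invented for illustration): a journey only happens if every line on its route is running, so cutting a line that loses money on its own can drag down the profitable lines it feeds.

```python
# Toy model: each journey needs every line on its route, and its fare
# revenue is split evenly across those lines. All numbers are made up.

JOURNEYS = [
    # (lines used, fare revenue per day)
    (("A",), 100.0),
    (("A", "B"), 80.0),   # connecting trips that rely on "feeder" line B
    (("B", "C"), 60.0),
    (("C",), 90.0),
]
LINE_COSTS = {"A": 70.0, "B": 80.0, "C": 70.0}

def line_profits(open_lines):
    """Per-line profit, counting only journeys whose whole route is open."""
    revenue = {line: 0.0 for line in open_lines}
    for route, fare in JOURNEYS:
        if all(line in open_lines for line in route):
            for line in route:
                revenue[line] += fare / len(route)
    return {line: revenue[line] - LINE_COSTS[line] for line in open_lines}

full = line_profits({"A", "B", "C"})
# Line B looks unprofitable on its own: {'A': 70.0, 'B': -10.0, 'C': 50.0}
cut = line_profits({"A", "C"})
# Cutting B also kills the A-B and B-C journeys: {'A': 30.0, 'C': 20.0}
# Total profit drops from 110 to 50 — the network as a whole is worse off.
```

The point of the sketch is simply that line B’s −10 disappears along with the connecting traffic that was propping up A and C; evaluated line by line, cutting B looks like a saving, while evaluated network-wide it is a loss.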

What does that have to do with Google?

I certainly don’t think that Google is going to shrink anytime soon due to some drastic cost-cutting — they are smarter than that and I am sure that they do consider each project carefully. And of course, they have every right to shut down any service they please, they run a business after all and most of these services are provided for free.

But I think they may be neglecting the interdependence between their products and the relationship with their users. People don’t want to use just that particular service, they are trying to find solutions to their problems. Google uses the Web to provide solutions and I believe that the interconnected nature of the Web should make them consider business rationalisation differently.

One reason why we use Google services instead of others is not necessarily that they are better, but that we feel Google is more reliable. A start-up could go bust, and since I don’t want to lose my data or change my habits, I will be more hesitant before committing to one of its services. As Pisani mentioned on Transnets (in French), there is a moral contract between users and Google: by interrupting some web services, Google reminds us that maybe we should not entrust the Internet giant with all our online documents, emails, blogs, videos, feeds, applications, etc.

We are in an interesting transition period where the Web is supplanting the operating system as a development platform. By shutting down these services even if they operate at a loss, Google is pouring cold water on those who believe in moving applications to the Web and were counting on Google to spearhead the move. The trend towards web-based applications is not going to stop, but we now have a reason to think twice before using Google’s services and their APIs.

I am still glad that there are companies out there like Google to break new ground, but their recent house cleaning is a good reminder that if we follow, we may also need to backtrack.

The Life of …

Sunday, January 18th, 2009

There was an interesting article in Le Monde today referring to an article published in Le Tigre, an independent French magazine. The article in Le Tigre was simply the reconstructed biography of “Marc L” – a person, the paper claimed, they chose randomly on the Internet using Google and the data collected on social web sites.

They posted a number of details about him, such as his age, sexual preference, the schools he attended, the music he listened to, and his friends and partners over the last few years… It reads just like a mix between the people section of a newspaper and a Wikipedia entry.

The persistence of information

All the information was legally obtained, since it was publicly available on the Internet – although they claim many details were removed after he requested it. While we are all aware (to varying degrees) of the trails we leave on the Internet, it is easy to forget that information we thought was transient is still there, and can be collected to produce our portrait and extract information about us.

The conversation we had on a forum 5 years ago may still be there somewhere, and unflattering photos posted by friends may still be there too. Since the emergence of social websites, there have been many articles on the subject and on the impact of leaving too much information on these sites.

But as data becomes better organised and more searchable, yet no easier to remove, we have all the more reason to be careful about what we say on the Net. Let’s not forget Google’s mission to organise the world’s information and make it universally accessible. Information is not transient.

A false sense of privacy

An email is no more secure than a postcard, as two legal secretaries in a Sydney law firm were painfully reminded a few years ago when their incendiary email exchange was forwarded to pretty much everyone in the company before appearing in overseas newspapers.

But the so-called “Generation Y”, who like to collect hundreds, if not thousands, of friends on Facebook or MySpace and maintain personal blogs, do not seem to mind. In fact, I am surprised by how much information people are willing to share with others — strangers and friends alike.

It seems that a lot of people are lulled into a false sense of privacy, not realising how much of the information they publish becomes public, and how much of it could be used against them.

Identity theft

Given that most of the answers to the typical “security question” can be gleaned from the Internet, we are increasingly vulnerable to identity theft, not to mention that a lot of people still use basic passwords, and generally the same one, across their accounts.

Anyone’s personal information is, as marketers would put it, “at your fingertips”, so new techniques are going to be needed to protect us from identity theft. And I hope that the options will be better than coming up with passwords that include combinations of symbols and numbers which are impossible to remember, or providing even more personal information about ourselves.

The future

All this makes me wonder how this will evolve.

It is not hard to imagine online reputation management software emerging to help people clean up their traces, perhaps optimising their friends and links to improve their online identity, or removing what should not be there.

On the other side, you could have increasingly sophisticated automated online portraits for use by marketers and recruiters – primitive versions already exist. Or identity thieves could start collecting photos of us for when face recognition becomes more widely available.

So should we learn not to disclose too much information about ourselves, just as we learned not to undress in front of an open window? Or should we get used to watching the neighbour walking around naked?